What is interpretable machine learning? How can we make algorithms accountable for their decisions? How can we better explain how AI works in critical situations?
As AI plays an ever larger role in our lives, we increasingly face situations where we need to explain how algorithms reach their decisions. This is especially important in domains like law, medicine, and autonomous vehicles.
In this webinar, we tackle this important topic. We explain why explainable AI matters, walk through some techniques for making models interpretable, and discuss what decision makers can do to keep their algorithms transparent.
Watch the webinar here.