

Interpretability in Machine Learning

Interpretability is often considered crucial for the effective real-world deployment of intelligent systems. Unlike performance measures such as accuracy, objective criteria for measuring interpretability are difficult to identify. Although the volume of research on interpretability is growing rapidly, there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it; many of these questions have yet to be rigorously defined and addressed. One common taxonomy divides interpretability methods in ML into global and local approaches. Global interpretability aims at a general understanding of how the system works as a whole and of what patterns are present in the data; local interpretability, by contrast, explains a particular prediction or decision. Here, we shed light on these issues as they relate to state-of-the-art machine learning algorithms.
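The global/local distinction can be made concrete with a minimal sketch, assuming scikit-learn is available. Global interpretability is illustrated with permutation importance over a held-out set (which features matter on average); local interpretability with a simple perturbation-based attribution for a single prediction (how much each feature shifts the predicted probability for one instance). The mean-substitution scheme below is an illustrative choice, not a specific published method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Global interpretability: which features matter on average,
# measured by the drop in score when each feature is shuffled.
global_imp = permutation_importance(model, X_te, y_te, n_repeats=5,
                                    random_state=0)

# Local interpretability: how does each feature affect ONE prediction?
# Illustrative scheme: replace each feature with its training mean and
# record the change in the predicted probability of the positive class.
x = X_te[0:1]
base_prob = model.predict_proba(x)[0, 1]
local_contrib = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[0, j] = X_tr[:, j].mean()
    local_contrib[j] = base_prob - model.predict_proba(x_pert)[0, 1]
```

The global scores describe the model's overall behaviour, while `local_contrib` explains only the single instance `x`; the two can disagree, which is exactly why the taxonomy distinguishes them.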


