Interpretability in Machine Learning
If you have a question about this talk, please contact Alessandro Davide Ialongo.

Abstract: Interpretability is often considered crucial for enabling effective real-world deployment of intelligent systems. Unlike performance measures such as accuracy, objective measurement criteria for interpretability are difficult to identify. The volume of research on interpretability is growing rapidly (Google Scholar returns more than 20,000 publications related to interpretability in ML from the last five years), yet there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for these issues to be rigorously defined and addressed. Moreover, recent European Union regulation (the GDPR) will, from 2018, require algorithms that make decisions based on user-level predictors and significantly affect those users to provide an explanation (a “right to explanation”).

One taxonomy of interpretability in ML distinguishes global from local interpretability. The former aims at a general understanding of how the system works as a whole and of what patterns are present in the data, whereas local interpretability provides an explanation of a particular prediction or decision. We look at two algorithms, one from each category. The prediction difference analysis method visualizes the response of a deep neural network to a specific input: when classifying images, it highlights the areas of the input image that provide evidence for or against a certain class. We also examine an algorithm that facilitates human understanding of a dataset by learning prototypes and criticisms; the method, MMD-critic, is motivated by the Bayesian model criticism framework.

Recommended reading:
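For readers who want a concrete picture of the MMD-critic idea mentioned in the abstract (this sketch is not part of the recommended reading), the following is a minimal numpy illustration of greedy prototype selection and witness-based criticism selection. It assumes an RBF kernel with a fixed bandwidth, omits the log-determinant diversity regulariser used in the original paper, and all function and variable names are illustrative rather than taken from the talk or the authors' implementation.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Pairwise RBF kernel matrix between rows of X and rows of Y.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def greedy_prototypes(X, m, gamma=1.0):
        # Greedily add the point that most reduces MMD^2 between the data
        # distribution and the distribution of selected prototypes.
        K = rbf_kernel(X, X, gamma)
        selected, remaining = [], list(range(len(X)))
        for _ in range(m):
            best, best_val = None, np.inf
            for c in remaining:
                S = selected + [c]
                cross = K[:, S].mean()            # data-prototype kernel term
                within = K[np.ix_(S, S)].mean()   # prototype-prototype kernel term
                val = within - 2.0 * cross        # MMD^2 up to a constant data-data term
                if val < best_val:
                    best, best_val = c, val
            selected.append(best)
            remaining.remove(best)
        return selected

    def criticisms(X, prototypes, c, gamma=1.0):
        # Criticisms are the points where the witness function (data kernel mean
        # minus prototype kernel mean) deviates most, i.e. points the prototypes
        # explain poorly.
        K = rbf_kernel(X, X, gamma)
        witness = K.mean(axis=1) - K[:, prototypes].mean(axis=1)
        order = np.argsort(-np.abs(witness))
        return [i for i in order if i not in set(prototypes)][:c]

    # Hypothetical usage:
    # X = np.random.randn(200, 2)
    # protos = greedy_prototypes(X, m=5)
    # crits = criticisms(X, protos, c=3)

The greedy step is the key design choice: each candidate is scored by how much adding it shrinks the (kernel) discrepancy between the dataset and the prototype set, so prototypes summarise the data while criticisms flag what that summary misses.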
This talk is part of the Machine Learning Reading Group @ CUED series.