
Global model explainability via aggregation


If you have a question about this talk, please contact Adrian Weller.

Current approaches for explaining machine learning models fall into two distinct classes: antecedent event influence and value attribution. The former uses training instances to describe how much influence a training point exerts on a test prediction, while the latter attributes value to the features most pertinent to a given prediction. In this talk, I will discuss my work AVA (Aggregate Valuation of Antecedents), which fuses these two explanation classes into a new approach to feature attribution that not only retrieves local explanations but also captures global patterns learned by a model. We find that aggregating and weighting Shapley value explanations via AVA yields a valid Shapley value explanation. I will present a medical use case for AVA explanations, mirroring diagnostic approaches used by healthcare professionals.
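The abstract does not give AVA's exact algorithm, but the claim that a weighted aggregate of Shapley value explanations is itself a valid Shapley value explanation follows from the linearity of the Shapley value. The sketch below is an illustration of that property only, under assumptions of my own: a brute-force exact Shapley computation (exponential, tiny feature counts only) and a simple normalized weighting over per-antecedent explanations. It is not the method from the talk.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x against a baseline,
    by enumerating all feature coalitions (feasible only for tiny n)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

def aggregate_explanations(explanations, weights):
    """Weighted average of per-antecedent Shapley explanations.
    By linearity, the result is the Shapley explanation of the
    correspondingly weighted mixture of the underlying games."""
    total = sum(weights)
    n = len(explanations[0])
    return [sum(w * e[i] for w, e in zip(weights, explanations)) / total
            for i in range(n)]

# Hypothetical example: a small linear model, for which the Shapley value
# of feature i is coefficient_i * (x_i - baseline_i).
f = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phi = shapley_values(f, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
# Efficiency holds: the attributions sum to f(x) - f(baseline).
```

The aggregate inherits the efficiency axiom: its attributions sum to the weighted average of the per-antecedent prediction differences, which is what makes the aggregated explanation "valid" in the Shapley sense.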

I will also discuss new heuristics and show preliminary results for aggregating local explanations produced by different explanation techniques using a wisdom-of-the-crowds approach, subject to a user-specified criterion.
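The talk does not specify these heuristics; one illustrative reading of "wisdom of the crowds subject to a user-specified criterion" is sketched below. The filtering step and the per-feature median consensus are assumptions for illustration, not the speaker's method.

```python
from statistics import median

def crowd_aggregate(explanations, criterion):
    """Hypothetical sketch: keep only candidate attribution vectors
    (one per explanation technique) that satisfy a user-specified
    criterion, then take the per-feature median as the consensus."""
    kept = [e for e in explanations if criterion(e)]
    if not kept:
        raise ValueError("no explanation satisfies the criterion")
    n = len(kept[0])
    return [median(e[i] for e in kept) for i in range(n)]

# Hypothetical criterion: reject attribution vectors with large total mass.
sparse_enough = lambda e: sum(abs(v) for v in e) <= 5
consensus = crowd_aggregate([[1.0, 0.0], [2.0, 0.0], [10.0, 5.0]], sparse_enough)
```

The median is one natural "crowd" combiner because it is robust to a single explainer producing an outlying attribution; the criterion hook lets the user encode constraints such as sparsity or sign agreement.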

This talk is part of the Machine Learning @ CUED series.


