Interpretability in Machine Learning: What it means, How we're getting there

If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

Please note, this event may be recorded. Microsoft will own the copyright of any recording and reserves the right to distribute it as required.

As machine learning systems become ubiquitous, there is a growing interest in interpretable machine learning—that is, systems that can provide human-interpretable rationale for their predictions and decisions. In this talk, I’ll first give two examples of real healthcare settings—mortality modeling in the ICU and treatment response in major depression—where the ability to interpret learned models is essential, and describe how we built models to meet those needs. Next, I’ll speak about some of the work we are doing to understand interpretability more broadly: what exactly makes a model interpretable? And can we optimize for it? By formalizing these notions, we can hope to identify universals of interpretability and also rigorously compare different kinds of systems for producing algorithmic explanations. Includes joint work with Been Kim, Andrew Ross, Mike Wu, Michael Hughes, Menaka Narayanan, Sam Gershman, Emily Chen, Jeffrey He, Isaac Lage, Roy Perlis, Tom McCoy, Gabe Hope, Leah Weiner, Erik Sudderth, Sonali Parbhoo, Marzyeh Ghassemi, Pete Szolovits, Mornin Feng, Leo Celi, Nicole Brimmer, Tristan Naumann, Rohit Joshi, Anna Rumshisky, and the Berkman Klein Center.

This talk is part of the Frontiers in Artificial Intelligence Series.

