
Explanations for medical artificial intelligence


If you have a question about this talk, please contact Matt Farr.

(Joint work with Diana Robinson)

AI systems are currently being developed and deployed for a variety of medical purposes. A common objection to this trend is that medical AI systems risk being ‘black boxes’, unable to explain their decisions. How serious this objection is remains unclear. As some commentators point out, human doctors too are often unable to properly explain their decisions. In this paper, we seek to clarify this debate. We (i) analyse the reasons why explainability is important for medical AI, (ii) outline some of the features that make for good explanations in this context, and (iii) compare how well humans and AI systems are able to satisfy these criteria. We conclude that while humans currently have the edge, recent developments in technical AI research may allow us to construct medical AI systems that are better explainers than humans.

This talk is part of the CamPoS (Cambridge Philosophy of Science) seminar series.


