Explanations for medical artificial intelligence
If you have a question about this talk, please contact Matt Farr.

(Joint work with Diana Robinson)

AI systems are currently being developed and deployed for a variety of medical purposes. A common objection to this trend is that medical AI systems risk being ‘black boxes’, unable to explain their decisions. How serious this objection is remains unclear: as some commentators point out, human doctors are often unable to properly explain their decisions either. In this paper, we seek to clarify this debate. We (i) analyse the reasons why explainability is important for medical AI, (ii) outline some of the features that make for good explanations in this context, and (iii) compare how well humans and AI systems are able to satisfy these criteria. We conclude that while humans currently have the edge, recent developments in technical AI research may allow us to construct medical AI systems that are better explainers than humans.

This talk is part of the CamPoS (Cambridge Philosophy of Science) seminar series.