Human Values and Explainable Artificial Intelligence
If you have a question about this talk, please contact Tellef S. Raabe.

A common objection to the use of artificial intelligence in decision-making is the concern that it is often difficult to explain or understand how AI systems make decisions. There is a growing body of technical AI research developing techniques for making AI more “explainable” or “interpretable”. However, it is still not well understood why this is an important property for an AI system to possess, or what types of explanations are most important. While there are empirical studies of which types of explanations individuals subjected to AI decision-making find satisfactory, psychological evidence suggests people’s sense of understanding is often unreliable and easy to manipulate. In this paper, I argue that a pragmatist account of explanation provides a fruitful framework for exploring the problem of AI explainability, one that allows us to combine normative and empirical perspectives on user values.

This talk is part of the Cambridge Technology & New Media Research Cluster series.