The Algorithmic Transparency Requirement
If you have a question about this talk, please contact Matthew Colbrook.

Deep learning still has drawbacks in terms of trustworthiness, which describes a method that is comprehensible, fair, safe, and reliable. To mitigate the potential risks of AI, clear obligations associated with trustworthiness have been proposed via regulatory guidelines, e.g., in the European AI Act. A central question is therefore to what extent trustworthy deep learning can be realized. Establishing the properties that constitute trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. We derive a mathematical framework that enables us to analyze whether a transparent implementation in a computing model is feasible. Finally, as an example, we apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital computing models represented by Turing machines.

This talk is part of the Applied and Computational Analysis series.