
The Algorithmic Transparency Requirement


If you have a question about this talk, please contact Matthew Colbrook.

Deep learning still has drawbacks in terms of trustworthiness, i.e., the demand for comprehensible, fair, safe, and reliable methods. To mitigate the potential risks of AI, clear obligations concerning trustworthiness have been proposed in regulatory guidelines, e.g., the European AI Act. A central question is therefore to what extent trustworthy deep learning can be realized. Establishing the properties that constitute trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. We derive a mathematical framework that enables us to analyze whether a transparent implementation in a given computing model is feasible. Finally, as an exemplary application of our trustworthiness framework, we analyze deep learning approaches to inverse problems in digital computing models represented by Turing machines.

This talk is part of the Applied and Computational Analysis series.

