
Trustworthy and Responsible Machine Learning



Machine learning-based AI systems are being adopted across a wide range of domains, exerting a profound influence on daily life, industry, scientific research, and beyond. Ensuring the safety of these systems, particularly in high-stakes real-world applications, is an imperative priority. Recent landmark events, such as the first global AI Safety Summit, have underscored the critical significance of AI safety.

Trustworthy and responsible machine learning has gained significant attention from governments, industries, and scientific communities alike, and is recognized as an essential component and fundamental pillar in the pursuit of AI safety objectives. This presentation will briefly cover some noteworthy limitations of current AI systems, such as opaqueness, bias, fragility, and privacy invasion. It will then focus on the technical dimensions of trustworthy and responsible machine learning, exploring measures and techniques designed to enhance transparency, interpretability, robustness, and privacy protection. While the presentation does not offer an exhaustive overview, it aspires to provide researchers and users with useful insights and to advocate a more prudent use of AI technology, since it is crucial for users not only to harness the benefits AI offers but also to mitigate its potential harms and risks.

This talk is part of the CHIA Seminar Talks: AI for Social and Global Good series.
