
Explainability can foster trust in artificial intelligence in geoscience and disaster risk management


If you have a question about this talk, please contact Lisanne Blok.

Artificial Intelligence (AI) is transforming many fields, including geosciences and disaster risk management, by offering powerful tools for analysing complex systems and supporting critical decision-making processes. However, as the complexity, and potentially the predictive skill, of an AI model increases, its interpretability (the ability to understand the model and its predictions from a physical perspective) may decrease. In critical situations, such as scenarios caused by natural hazards, the resulting lack of understanding of how a model works, and the consequent lack of trust in its results, can become a barrier to its implementation.

This talk focuses on the emerging field of Explainable AI (XAI), which makes opaque ‘black-box’ AI models more comprehensible and interpretable to humans, can build trust in their results, and can thereby encourage greater adoption of AI methods in geoscience and disaster risk management. Drawing on recent research, it highlights how XAI can support the uptake of AI in this field and outlines key challenges, current trends, and promising directions for the future integration of XAI into geoscience and disaster risk management applications.
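The abstract does not name specific methods, so, purely as an illustrative sketch of the kind of technique XAI covers, the snippet below applies permutation feature importance (a model-agnostic explanation method available in scikit-learn) to a synthetic stand-in for a hazard-susceptibility classifier. The feature names and data here are invented for illustration and are not from the talk.

```python
# Illustrative sketch only: a model-agnostic XAI technique (permutation
# feature importance) applied to a hypothetical landslide-susceptibility
# classifier trained on synthetic data. All feature names are invented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical geoscience predictors; the data itself is synthetic.
feature_names = ["slope", "rainfall_24h", "soil_moisture", "lithology_idx", "ndvi"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque 'black-box' model that is hard to interpret directly.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# XAI step: measure how much shuffling each feature degrades held-out skill.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name:>14}: {mean:.3f} +/- {std:.3f}")
```

A ranked list of this kind lets a domain scientist check whether the model relies on physically plausible drivers, which is one route to the trust the talk describes.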

This talk is part of the Bullard Laboratories Wednesday Seminars series.
