Explainability can foster trust in artificial intelligence in geoscience and disaster risk management
- Speaker: Saman Ghaffarian, UCL
- Date & Time: Wednesday 14 May 2025, 14:00 - 15:00
- Venue: Wolfson Lecture Theatre, Bullard Laboratories
Abstract
Artificial Intelligence (AI) is transforming many fields, including geosciences and disaster risk management, by offering powerful tools for analysing complex systems and supporting critical decision-making processes. However, as the complexity and, potentially, the predictive skill of an AI model increase, its interpretability (the ability to understand the model and its predictions from a physical perspective) may decrease. In critical situations, such as scenarios caused by natural hazards, the resulting lack of understanding of how a model works, and the consequent lack of trust in its results, can become a barrier to its implementation.
This talk focuses on the emerging field of Explainable AI (XAI), which supports human-comprehensible understanding and interpretation of opaque "black-box" AI models, can build trust in AI model results, and can encourage greater adoption of AI methods in geoscience and disaster risk management. Drawing on recent research, it highlights how XAI can enhance the adoption of AI in this field and outlines key challenges, current trends, and promising directions for the future integration of XAI into geoscience and disaster risk management applications.
Series: This talk is part of the Bullard Laboratories Wednesday Seminars series.
Included in Lists
- Bullard Laboratories Wednesday Seminars
- Department of Earth Sciences seminars
- Wolfson Lecture Theatre, Bullard Laboratories