Explainability can foster trust in artificial intelligence in geoscience and disaster risk management
If you have a question about this talk, please contact Lisanne Blok.

Artificial Intelligence (AI) is transforming many fields, including geosciences and disaster risk management, by offering powerful tools for analysing complex systems and supporting critical decision-making processes. However, as the complexity and potentially the predictive skill of an AI model increase, its interpretability (the ability to understand the model and its predictions from a physical perspective) may decrease. In critical situations, such as scenarios caused by natural hazards, the resulting lack of understanding of how a model works, and the consequent lack of trust in its results, can become a barrier to its implementation.

This talk focuses on the emerging field of Explainable AI (XAI), which enhances human-comprehensible understanding and interpretation of opaque ‘black-box’ AI models, can build trust in AI model results, and can encourage greater adoption of AI methods in geoscience and disaster risk management. Drawing on recent research, it highlights how XAI can enhance the adoption of AI in this field and outlines key challenges, current trends, and promising directions for the future integration of XAI into geoscience and disaster risk management applications.

This talk is part of the Bullard Laboratories Wednesday Seminars series.
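As a purely illustrative sketch of the kind of post-hoc explanation the abstract alludes to (not material from the talk itself), the example below applies permutation feature importance, a model-agnostic XAI technique, to a black-box regressor trained on synthetic, geoscience-flavoured data. The feature names, dataset, and model choice are all hypothetical placeholders.

```python
# Hypothetical sketch of a post-hoc XAI workflow: train a "black-box"
# model on synthetic tabular data and ask which inputs drive its
# predictions. Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors of, say, landslide susceptibility.
X = np.column_stack([
    rng.uniform(0, 45, n),    # slope_deg
    rng.uniform(0, 300, n),   # rainfall_mm
    rng.uniform(0, 1, n),     # vegetation_index
    rng.normal(0, 1, n),      # noise feature (should rank low)
])
feature_names = ["slope_deg", "rainfall_mm", "vegetation_index", "noise"]
# Synthetic target: depends on slope, rainfall and vegetation, not on noise.
y = 0.02 * X[:, 0] + 0.005 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does test-set skill drop when each
# feature is shuffled? Larger drops indicate more influential inputs.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:18s} importance = {mean:.3f} +/- {std:.3f}")
```

Checking that the resulting ranking matches physical expectation (here, the deliberately uninformative noise column should rank last) is one simple way such explanations can support the trust-building described in the abstract.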