BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Explainability can foster trust in artificial intelligence in geos
 cience and disaster risk management  - Saman Ghaffarian\, UCL
DTSTART:20250514T130000Z
DTEND:20250514T140000Z
UID:TALK231937@talks.cam.ac.uk
CONTACT:Lisanne Blok
DESCRIPTION:Artificial Intelligence (AI) is transforming many fields\, inc
 luding geosciences and disaster risk management\, by offering powerful too
 ls for analysing complex systems and supporting critical decision-making p
 rocesses. However\, as the complexity and potentially the predictive skill o
 f an AI model increases\, its interpretability — the ability to understa
 nd the model and its predictions from a physical perspective — may decre
 ase. In critical situations\, such as scenarios caused by natural hazards\
 , the resulting lack of understanding of how a model works and consequent 
 lack of trust in its results can become a barrier to its implementation.\n
 \nThis talk focuses on the emerging field of Explainable AI (XAI)\, which e
 nhances human understanding and interpretation of opaque ‘black-box’ AI mo
 dels\, can build trust in AI model results\, and can encourage greater adop
 tion of AI methods in geoscience and disaster risk ma
 nagement. Drawing on recent research\, it highlights how XAI can enhance t
 he adoption of AI in this field and outlines key challenges\, current tren
 ds\, and promising directions for future integration of XAI into geoscienc
 e and disaster risk management applications. 
LOCATION:Wolfson Lecture Theatre\, Bullard Laboratories
END:VEVENT
END:VCALENDAR
