BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Towards Human-Centered Explanations of AI Predictions - Chenhao Ta
 n\, University of Chicago
DTSTART:20230601T150000Z
DTEND:20230601T160000Z
UID:TALK201937@talks.cam.ac.uk
CONTACT:Panagiotis Fytas
DESCRIPTION:Explanations of AI predictions are considered crucial for huma
 n-AI interactions. I argue that successful human-AI interactions require t
 wo steps: AI explanation and human interpretation. Therefore\, effective 
 explanations necessitate an understanding of human interpretation. In thi
 s talk\, I will present our work to address this challenge through human-c
 entered evaluation and generation of explanations. First\, I will discuss 
 the distinction between emulation and discovery tasks\, which shapes human
 interpretation. In emulation tasks\, humans provide ground-truth labels an
 d the goal of AI is to emulate human intelligence. While it may seem intui
 tive that humans can provide valid explanations in this case\, I argue tha
 t humans may not be able to provide "good" explanations. Caution is thus r
 equired when using human explanations for evaluation or as supervision si
 gnals\, despite growing efforts to build datasets of human explanations. I
 n contrast\, in discovery tasks\, humans may not necessarily know the gro
 und-truth label. Human-subject experiments show that explanations fail to im
 prove human decisions\, namely\, human+AI rarely outperforms AI alone. I w
 ill highlight the importance of identifying human strengths and AI strengt
 hs\, and introduce decision-focused summarization. Finally\, I will discus
 s recent work on leveraging explanations to improve AI models.\n
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
