
Towards Human-Centered Explanations of AI Predictions


If you have a question about this talk, please contact Panagiotis Fytas.

Explanations of AI predictions are considered crucial for human-AI interaction. I argue that successful human-AI interaction requires two steps: AI explanation and human interpretation. Effective explanations therefore necessitate an understanding of human interpretation. In this talk, I will present our work on addressing this challenge through human-centered evaluation and generation of explanations. First, I will discuss the distinction between emulation and discovery tasks, which shapes human interpretation. In emulation tasks, humans provide the ground-truth labels, and the goal of AI is to emulate human intelligence. While it may seem intuitive that humans can provide valid explanations in this case, I argue that humans may not be able to provide “good” explanations. Caution is thus required when using human explanations for evaluation or as supervision signals, despite the growing efforts to build datasets of human explanations. In contrast, in discovery tasks, humans do not necessarily know the ground-truth label. Human-subject experiments show that explanations fail to improve human decisions: human+AI teams rarely outperform AI alone. I will highlight the importance of identifying the respective strengths of humans and AI, and introduce decision-focused summarization. Finally, I will discuss recent work on leveraging explanations to improve AI models.

This talk is part of the Language Technology Lab Seminars series.

