
Towards explainable fact checking


If you have a question about this talk, please contact Guy Aglionby.

Automatic fact checking is one of the more involved NLP tasks currently researched: it requires not only sentence understanding, but also an understanding of how claims relate to evidence documents and world knowledge. Moreover, despite efforts to formalise fact checking through the development of benchmark datasets, there is still no common understanding in the automatic fact checking community of how its subtasks — claim check-worthiness detection, evidence retrieval, veracity prediction — should be framed, owing in part to the task's complexity. The first part of the talk will be on automatically generating textual explanations for fact checking, thereby exposing some of the reasoning processes these models follow. The second part will re-examine how claim check-worthiness is defined and how check-worthy claims can be detected, followed by how to automatically generate claims that are hard to fact-check automatically.

Bio:

Isabelle Augenstein is an associate professor in Natural Language Processing and Machine Learning at the Department of Computer Science, University of Copenhagen, where she heads the Copenhagen NLU research group. Her main research interests are weakly supervised and low-resource learning, with applications including fact checking, question answering and cross-lingual learning.

This talk is part of the NLIP Seminar Series.


© 2006-2020 Talks.cam, University of Cambridge.