
Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction


  • Speaker: Martin Fajčík (Brno University of Technology)
  • Date: Tuesday 12 July 2022, 14:00–15:00
  • Venue: Computer Lab, FW26

If you have a question about this talk, please contact Michael Schlichtkrull.

Abstract:

We present Claim-Dissector: a novel latent variable model for fact-checking and fact analysis which, given a claim and a set of retrieved provenances, jointly learns (i) which provenances are relevant to the claim and (ii) the veracity of the claim. We show that our system achieves state-of-the-art results on FEVER, comparable to the two-stage systems often used in traditional fact-checking pipelines, while using significantly fewer parameters and less computation. Our analysis shows that the proposed approach further learns not just which provenances are relevant, but also which lead towards supporting and which towards denying the claim, without direct supervision. This not only adds interpretability but also makes it possible to detect claims with conflicting evidence automatically. Furthermore, we study whether our model can learn fine-grained relevance cues while using only coarse-grained supervision, and show that it achieves competitive sentence-level recall with paragraph-level relevance supervision alone. Finally, moving towards the finest granularity of relevance, we show that our framework is capable of strong token-level interpretability. To this end, we present a new benchmark focused on token-level interpretability: humans annotate the tokens in relevant provenances that they considered essential when making their judgement, and we measure how similar these annotations are to the tokens our model focuses on. Our code, dataset and demo will be released online.
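The joint relevance–veracity idea from the abstract can be illustrated with a toy sketch. This is not the Claim-Dissector architecture itself; the function name, the hand-picked logit values, and the sum-then-softmax aggregation below are all illustrative assumptions. The point it demonstrates is that when per-provenance support/refute logits are summed into claim-level veracity logits, the same quantities that drive the final verdict also serve as per-provenance relevance scores, for free.

```python
import math

def claim_dissector_sketch(provenance_scores):
    """Toy illustration: each provenance contributes a (support, refute)
    logit pair. Summing the logits per class gives claim-level veracity;
    the magnitude of each pair doubles as an interpretable relevance score.
    """
    support = sum(s for s, r in provenance_scores)
    refute = sum(r for s, r in provenance_scores)
    # Relevance of each provenance: how strongly it pushes either class.
    relevance = [max(abs(s), abs(r)) for s, r in provenance_scores]
    # Veracity probability via a numerically stable two-class softmax.
    z = max(support, refute)
    es, er = math.exp(support - z), math.exp(refute - z)
    p_support = es / (es + er)
    return p_support, relevance

# Hypothetical logits for three retrieved provenances: one supporting,
# one nearly irrelevant, one refuting — i.e. conflicting evidence.
p, rel = claim_dissector_sketch([(2.0, -1.0), (0.1, 0.0), (-0.5, 1.5)])
```

Because each provenance's contribution is additive, one can read off not just which provenances matter (`rel`) but also in which direction they pull, which is the property the abstract uses to surface conflicting evidence.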

Bio:

Martin Fajčík (read as "Fay-Cheek") is a PhD candidate in Natural Language Processing in the Knowledge Technology Research Group at FIT BUT in Brno, Czech Republic, advised by prof. Pavel Smrž (ž is read like the j in the French "Jean"). Since 2021, he has also worked as a research assistant at the IDIAP research institute in Martigny, Switzerland. His PhD work focuses on open-domain knowledge processing, mainly question answering and fact-checking. He enjoys a good hike and informal discussions over tea.

This talk is part of the NLIP Seminar Series.


© 2006-2024 Talks.cam, University of Cambridge.