Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction
If you have a question about this talk, please contact Michael Schlichtkrull.

Abstract: We present Claim-Dissector, a novel latent-variable model for fact-checking and fact-analysis which, given a claim and a set of retrieved provenances, jointly learns (i) which provenances are relevant to the claim and (ii) the veracity of the claim. We show that our system achieves state-of-the-art results on FEVER, comparable to the two-stage systems typically used in traditional fact-checking pipelines, while using significantly fewer parameters and less computation. Our analysis shows that the proposed approach further learns not only which provenances are relevant, but also which provenances support and which refute the claim, without direct supervision. This not only adds interpretability, it also allows claims with conflicting evidence to be detected automatically. Furthermore, we study whether our model can learn fine-grained relevance cues from coarse-grained supervision, and show that it achieves competitive sentence-level recall using only paragraph-level relevance supervision. Finally, moving to the finest granularity of relevance, we show that our framework is capable of strong token-level interpretability. To evaluate this, we present a new benchmark focusing on token-level interpretability: humans annotate the tokens in relevant provenances they considered essential when making their judgement, and we measure how similar these annotations are to the tokens our model focuses on. Our code, dataset and demo will be released online.

Bio: Martin Fajčík (read as "Fay-Cheek") is a PhD candidate in Natural Language Processing in the Knowledge Technology Research Group at FIT BUT (Faculty of Information Technology, Brno University of Technology) in Brno, Czech Republic, advised by Prof. Pavel Smrž (ž is read like the j in the French "Jean"). Since 2021, he has also worked as a research assistant at the Idiap Research Institute in Martigny, Switzerland. His PhD work focuses on open-domain knowledge processing, mainly question answering and fact-checking. He enjoys a good hike and informal discussions over tea.

This talk is part of the NLIP Seminar Series.
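The core idea in the abstract, per-provenance evidence scores that are supervised only through the claim-level veracity label, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the actual Claim-Dissector architecture: the linear scorer over pre-computed claim-provenance embeddings, the three-class label set, and the log-sum-exp aggregation are all choices made here for brevity.

```python
# Minimal sketch of joint relevance / veracity prediction, in the spirit of
# the abstract. NOT the Claim-Dissector model itself: encoder, class set and
# aggregation rule are illustrative assumptions.
import torch
import torch.nn as nn


class JointRelevanceVeracity(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_classes: int = 3):
        super().__init__()
        # One score per veracity class for each provenance. These per-class
        # scores double as latent relevance scores and receive no direct
        # supervision of their own.
        self.scorer = nn.Linear(hidden_dim, num_classes)

    def forward(self, provenance_embs: torch.Tensor):
        """provenance_embs: (num_provenances, hidden_dim) encodings of the
        claim paired with each retrieved provenance (encoder omitted)."""
        per_prov_logits = self.scorer(provenance_embs)          # (P, C)
        # Claim-level veracity: aggregate per-provenance evidence. A
        # log-sum-exp pool lets the strongest supporting / refuting
        # provenances dominate while staying differentiable.
        claim_logits = torch.logsumexp(per_prov_logits, dim=0)  # (C,)
        # Normalising over provenances within each class reads as "which
        # provenances push the prediction toward this class".
        relevance = torch.softmax(per_prov_logits, dim=0)       # (P, C)
        return claim_logits, relevance


if __name__ == "__main__":
    model = JointRelevanceVeracity()
    fake_embs = torch.randn(5, 768)             # 5 retrieved provenances
    claim_logits, relevance = model(fake_embs)
    print(claim_logits.shape, relevance.shape)  # (3,) and (5, 3)
```

Training such a model with a cross-entropy loss on claim_logits alone leaves the per-provenance scores without direct supervision, which is where the "relevance learned without direct supervision" reading of the abstract comes from.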