
Knowledge Issues and Language Models


If you have a question about this talk, please contact Michael Schlichtkrull.


As we continue to push the boundaries of natural language processing, it becomes imperative to better understand how language models interact with, incorporate, and are influenced by knowledge. In this talk, I will navigate the complex interaction between language models and knowledge.

I’ll begin by presenting a forthcoming ACL paper on fact verification against knowledge graphs, which demonstrates how language model outputs can be grounded in structured information sources. Following this, I’ll discuss ongoing research in multi-hop multi-set retrieval settings, and then explore how the sheer abundance of available knowledge can undermine retrieve-and-reason architectures. Lastly, I’ll put forward a hypothesis about the role of knowledge in language models and consider the potential advantages of combining generative and retrieval-based NLP approaches.


James is an Assistant Professor at the KAIST Graduate School of AI, South Korea, working on large-scale and knowledge-intensive natural language understanding. He recently completed his PhD at the University of Cambridge, where he developed models and methods for automated fact verification and correction.

This talk is part of the NLIP Seminar Series.



© 2006-2024, University of Cambridge.