
Incorporating Structure into NLP Models with Graph Neural Networks


  • Speaker: Michael Schlichtkrull (University of Cambridge)
  • Time: Friday 14 May 2021, 12:00-13:00
  • Venue: Virtual (Zoom)

If you have a question about this talk, please contact Huiyuan Xie.

Many of the most interesting NLP applications require modelling various structured sources in addition to text. In this talk, I will discuss how such structured data can be incorporated into neural NLP models with graph neural networks. In the first part, I will give a brief introduction to the subject and talk about our results on modelling knowledge bases for link prediction and question answering. I will also discuss the correspondence to transformers, along with some recent results on modelling tables for fact verification.

The second part of my talk will be about interpreting what these models learn. Graph neural networks are complex, highly nonlinear models: they can help NLP models benefit from structure, but it can be difficult to understand which structures are useful, how exactly the model uses the structure, and why specific decisions are made. I will talk about our recent work on GraphMask, an interpretability technique for graph neural networks. GraphMask produces rationales explaining which parts of a graph a given model relies on, both for individual examples and at the dataset level.
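As background for the abstract above, the core operation of a graph neural network is message passing: each node updates its state by aggregating transformed states from its neighbours. The sketch below is a minimal, generic illustration in NumPy (it is not code from the talk, and the function and weight names are placeholders), showing one mean-aggregation layer of the kind that R-GCN-style models stack over knowledge-base graphs.

```python
# Minimal sketch of one GNN message-passing layer (illustrative only;
# names like gnn_layer, W_self, W_nbr are placeholders, not from the talk).
import numpy as np

def gnn_layer(h, edges, W_self, W_nbr):
    """One message-passing step.

    h      : (n, d) array of node states
    edges  : list of (src, dst) directed edges
    W_self : (d, d) weight applied to a node's own state
    W_nbr  : (d, d) weight applied to incoming neighbour messages
    """
    n, d = h.shape
    agg = np.zeros((n, d))
    deg = np.zeros(n)
    for src, dst in edges:
        agg[dst] += h[src] @ W_nbr   # message from src to dst
        deg[dst] += 1
    deg = np.maximum(deg, 1)         # avoid division by zero for isolated nodes
    msg = agg / deg[:, None]         # mean over incoming messages
    return np.maximum(h @ W_self + msg, 0)  # ReLU nonlinearity

# Tiny 3-node graph with edges 0->1, 1->2, 0->2.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
W_self = rng.normal(size=(4, 4))
W_nbr = rng.normal(size=(4, 4))
h_next = gnn_layer(h, [(0, 1), (1, 2), (0, 2)], W_self, W_nbr)
print(h_next.shape)  # (3, 4)
```

An interpretability method like GraphMask, as described in the abstract, asks which of those edges could be masked out without changing the model's prediction; the edges that cannot be dropped form the rationale.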

Join Zoom Meeting

Meeting ID: 914 0934 9297 Passcode: 612874

This talk is part of the NLIP Seminar Series.




© 2006-2023, University of Cambridge.