Zero-shot Language Learning through Bayesian Neural Models

If you have a question about this talk, please contact James Thorne.

A key feature of general linguistic intelligence is the ability to generalise to new domains with sample efficiency, i.e. based on zero or few examples. This goal is of great practical importance, as most combinations of tasks and languages lack in-domain examples for supervised training. One possible solution is to bias the induction of neural models towards unseen languages by constructing an informative prior over parameters (imbued with universal linguistic knowledge) from seen languages. MAP inference can then learn efficiently from few in-domain examples by leveraging this prior information. Another solution is to estimate parameters for unseen task-language combinations based on seen combinations. In particular, the space of neural parameters can be factorised into its constituent latent variables, one for each task and one for each language, and the posteriors over these latent variables can be learned through stochastic variational inference. In this talk, I will argue that these approaches yield comparable or better performance than state-of-the-art zero-shot transfer methods for character-level language modelling, POS tagging, and named entity recognition over a wide and typologically diverse sample of languages. What is more, the Bayesian treatment of the inference problem allows for quantifying the uncertainty of each prediction, which is especially valuable in settings characterised by distribution shift, such as zero-shot transfer.
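The factorisation sketched in the abstract can be illustrated with a toy example. All names below are hypothetical, and an elementwise sum stands in for the learned generator network that would map the two latent variables to concrete model parameters; in the actual method, the latents and the generator would be fitted by stochastic variational inference rather than sampled at random.

```python
import random

random.seed(0)

DIM = 4  # latent dimensionality (illustrative only)

def rand_vec(dim):
    """Stand-in for a learned latent variable (here: a random draw)."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

# Hypothetical latents: one per task and one per language, each estimated
# from the seen task-language combinations.
task_latents = {"pos": rand_vec(DIM), "ner": rand_vec(DIM)}
lang_latents = {"en": rand_vec(DIM), "fi": rand_vec(DIM)}

def compose(task, lang):
    """Generate parameters for a task-language pair from its two latents.
    A simple elementwise sum stands in for a learned generator network."""
    zt, zl = task_latents[task], lang_latents[lang]
    return [a + b for a, b in zip(zt, zl)]

# Zero-shot transfer: even if ("ner", "fi") was never observed as a pair,
# both factors were estimated from other combinations, so parameters for
# the unseen combination can still be generated.
theta = compose("ner", "fi")
assert len(theta) == DIM
```

The point of the factorisation is that the number of latents grows additively (tasks + languages) while the combinations they can generate grow multiplicatively, which is what makes transfer to unseen pairs possible.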

This talk is part of the NLIP Seminar Series.

© 2006-2020 Talks.cam, University of Cambridge.