
Latent Variable Models for Text Generation


If you have a question about this talk, please contact Edoardo Maria Ponti.

Latent variable models provide an effective way to specify prior knowledge and to uncover the intermediate decision process of natural language generation. In this talk, we will go through two specific applications. The first incorporates a continuous latent variable into a dialogue generation model; the latent variable is trained to maximize its mutual information with neighboring utterances. We show that this latent variable significantly strengthens the connection between the generated response and its surrounding context, leading to a more engaging human-machine conversation. The second explicitly models the content selection process with discrete latent variables. By reducing the training variance with a variational autoencoder objective, the model successfully decouples content selection from the black-box generation model on both sentence compression and data-to-text tasks, enabling us to control content selection in an interpretable way.
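Low-variance training of discrete selection variables is commonly achieved with a continuous relaxation such as Gumbel-softmax; the abstract does not name the exact estimator, so the NumPy sketch below (with hypothetical per-token keep/drop logits) only illustrates the general idea of sampling a soft, differentiable selection mask over the tokens of a sentence.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, temperature=0.5):
    # Sample Gumbel noise and apply a temperature-scaled softmax:
    # a differentiable relaxation of drawing a one-hot sample,
    # often used to reduce gradient variance for discrete latents.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / temperature
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical keep/drop logits for a 6-token sentence
# (column 0 = "select this token", column 1 = "drop it").
logits = np.array([[ 2.0, -1.0],
                   [ 0.5,  0.5],
                   [-1.5,  1.0],
                   [ 3.0, -2.0],
                   [ 0.0,  0.0],
                   [-0.5,  2.5]])

mask = gumbel_softmax(logits)
keep_prob = mask[:, 0]  # soft probability of selecting each token
print(keep_prob)
```

As the temperature approaches zero the rows of `mask` approach one-hot keep/drop decisions, so the selector becomes interpretable while remaining trainable end to end with the downstream generator.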

This talk is part of the Language Technology Lab Seminars series.


© 2006-2019 Talks.cam, University of Cambridge.