
Learning Hierarchical Word and Sentence Representations


If you have a question about this talk, please contact Kris Cao.

Languages encode meaning in terms of hierarchical, nested structures. For example, we often find a coarse-to-fine organization of word meanings in lexical semantics (e.g., WordNet), and the relationships among words in a sentence are largely governed by latent nested structures (Chomsky, 1957). In this talk, I will first discuss how to incorporate hierarchical prior knowledge into a word representation model. I will show how regularizers can be used to encourage a hierarchical organization of the latent dimensions of vector-space word embeddings.
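One way such a regularizer can work is to charge later embedding dimensions more than earlier ones, so that coarse-grained meaning concentrates in the leading dimensions. The sketch below is an illustrative toy under that assumption, not the talk's actual regularizer; the function name and the exponential weighting scheme are invented for illustration.

```python
def hierarchical_penalty(vec, base=2.0):
    """Weighted L2 penalty: dimension i costs base**i * vec[i]**2.

    Later (finer-grained) dimensions are penalized more heavily, so a
    learner minimizing this term prefers to store information in the
    early (coarse) dimensions. Illustrative sketch only.
    """
    return sum((base ** i) * (v * v) for i, v in enumerate(vec))

# Two vectors with the same norm: one front-loaded, one back-loaded.
# The front-loaded vector incurs a much smaller penalty.
front_loaded = [1.0, 0.5, 0.0, 0.0]
back_loaded = [0.0, 0.0, 0.5, 1.0]
```

Under this penalty, `hierarchical_penalty(front_loaded)` is 1.5 while `hierarchical_penalty(back_loaded)` is 9.0, so training pressure pushes coarse, high-value information into the leading coordinates.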

I will then present a reinforcement learning method for inducing tree-structured neural networks that compute representations of natural language sentences. In contrast to sequential RNNs, which ignore tree structure, our model generates a latent tree for each sentence, using the reward from a semantic interpretation task to guide how the composition is syntactically structured. I will show that learning how words compose to form sentence meanings leads to better performance on various downstream tasks.
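Tree-structured composition of this kind is often executed as a shift-reduce process over a stack: SHIFT pushes the next word vector, REDUCE pops the top two subtree vectors and merges them. The sketch below assumes that setup, with an elementwise mean standing in for a learned composition cell (e.g. a TreeLSTM); in the full model the action sequence would be sampled from a policy and trained with the downstream reward, which is omitted here. Function names and the toy vectors are assumptions.

```python
def compose(word_vecs, actions, combine=None):
    """Build a sentence vector by running SHIFT/REDUCE actions over a stack.

    SHIFT pushes the next word vector from the buffer; REDUCE pops the top
    two vectors and combines them. The elementwise mean below is a
    placeholder for a learned composition function.
    """
    if combine is None:
        combine = lambda a, b: [(x + y) / 2 for x, y in zip(a, b)]
    stack, buf = [], list(word_vecs)
    for act in actions:
        if act == "SHIFT":
            stack.append(buf.pop(0))
        else:  # "REDUCE"
            right, left = stack.pop(), stack.pop()
            stack.append(combine(left, right))
    assert len(stack) == 1, "a valid action sequence yields one sentence vector"
    return stack[0]

# Right-branching tree (w1 (w2 w3)): shift all three words, reduce twice.
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sentence = compose(vecs, ["SHIFT", "SHIFT", "SHIFT", "REDUCE", "REDUCE"])
```

A different action sequence (e.g. `["SHIFT", "SHIFT", "REDUCE", "SHIFT", "REDUCE"]`) composes the same words under a left-branching tree and generally yields a different sentence vector, which is exactly the degree of freedom the latent-tree policy learns to exploit.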

This talk is part of the NLIP Seminar Series.



© 2006-2021 Talks.cam, University of Cambridge.