
Learning Syntax with Deep Neural Networks


If you have a question about this talk, please contact Giulia Bovolenta.

Joint work with Jean-Philippe Bernardy, University of Gothenburg

We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.'s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (a 24 million example part of the WaCky corpus, instead of their ~1.35 million example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs) and with alternative parameter settings for these systems (vocabulary size, training-to-test ratio, number of layers, memory size, and dropout rate). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases surprising, features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has significant consequences for our understanding of the way in which DNNs represent syntactic information. We also achieved significantly better accuracy with our language model for unsupervised prediction of agreement than Linzen et al. report in their LM experiments.
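To make the supervised agreement task concrete, the sketch below (in PyTorch, not the authors' code) shows the standard setup from Linzen et al. (2016): given the word sequence preceding a verb, a recurrent network predicts whether that verb should be singular or plural. All hyperparameter values and token indices here are illustrative placeholders, not the settings used in the experiments described above.

```python
import torch
import torch.nn as nn

class AgreementLSTM(nn.Module):
    """LSTM classifier for number agreement: singular vs. plural verb."""
    def __init__(self, vocab_size=50000, embed_dim=50, hidden_dim=50, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_dim, 2)  # logits over {singular, plural}

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices of the prefix before the verb
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)     # final hidden state of the sequence
        return self.out(self.drop(h_n[-1]))

# Toy usage: two already-indexed prefixes, padded to equal length.
model = AgreementLSTM()
batch = torch.tensor([[5, 17, 3, 0], [8, 2, 9, 4]])  # hypothetical word indices
labels = torch.tensor([0, 1])                         # 0 = singular, 1 = plural
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
```

The unsupervised language-model variant mentioned above differs in that the network is trained only to predict the next word; agreement accuracy is then read off from whether it assigns higher probability to the correctly inflected verb form than to the incorrect one.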

This talk is part of the Cambridge University Linguistic Society (LingSoc) series.
