Neural Network Language Modelling
If you have a question about this talk, please contact Konstantina Palla.
This talk will be an application-oriented review of recent research in language modelling with neural networks. Basic concepts of language modelling will be discussed first. I will then review the architecture and training of two types of neural-network language models (NNLMs): feed-forward NNLMs and recurrent NNLMs. Their direct applications and potential issues will also be discussed. In addition, one by-product of NNLM training is a low-dimensional representation of words in a continuous space; the latest research on such word representations will be covered as well.
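To make the feed-forward NNLM concrete, here is a minimal sketch of its forward pass in the style of Bengio et al.: context words are mapped through a shared embedding matrix (the low-dimensional word representation mentioned above), concatenated, passed through a tanh hidden layer, and turned into a next-word distribution by a softmax. All dimensions and weights below are illustrative placeholders, not details from the talk.

```python
import numpy as np

# Toy feed-forward NNLM forward pass; sizes are illustrative only.
rng = np.random.default_rng(0)

V, d, n, h = 10, 4, 3, 8             # vocab size, embedding dim, context length, hidden units
C = rng.normal(size=(V, d))          # embedding matrix: the "by-product" word representations
H = rng.normal(size=(n * d, h))      # input-to-hidden weights
U = rng.normal(size=(h, V))          # hidden-to-output weights

def predict_next(context_ids):
    """Return P(next word | context) for a sequence of n word indices."""
    x = C[context_ids].reshape(-1)   # look up and concatenate the n context embeddings
    a = np.tanh(x @ H)               # hidden layer
    logits = a @ U
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax over the vocabulary

p = predict_next([1, 5, 2])          # distribution over all V words
```

A recurrent NNLM differs mainly in replacing the fixed-length concatenated context with a hidden state that is updated word by word, removing the fixed context-window limit.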
This talk is part of the Machine Learning Reading Group @ CUED series.