Variational Smoothing in Recurrent Neural Network Language Models
If you have a question about this talk, please contact Qianchu Liu.

In this talk, we present a new theoretical perspective on data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus, such as the unigram distribution). We use this insight to propose a more principled method to apply at prediction time, as well as natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate performance improvements over existing data noising methods.

This talk is part of the Language Technology Lab Seminars series.
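For readers unfamiliar with the data-noising baseline that the talk reinterprets, below is a minimal sketch of unigram noising in the spirit of Xie et al. (2017): each input token is independently replaced, with some probability, by a sample from the corpus unigram distribution. This is an illustrative assumption of how such noising is typically implemented, not code from the talk; the names (noise_batch, unigram_probs, gamma) are hypothetical.

```python
# Illustrative sketch of unigram data noising (Xie et al., 2017).
# All names below are hypothetical, not taken from the talk.
import numpy as np

def noise_batch(token_ids, unigram_probs, gamma=0.2, rng=None):
    """Return a copy of `token_ids` where each token is independently
    replaced, with probability `gamma`, by a draw from `unigram_probs`."""
    rng = rng or np.random.default_rng()
    token_ids = np.asarray(token_ids)
    # Decide which positions get noised.
    mask = rng.random(token_ids.shape) < gamma
    # Sample replacement tokens from the corpus unigram distribution.
    replacements = rng.choice(len(unigram_probs), size=token_ids.shape,
                              p=unigram_probs)
    return np.where(mask, replacements, token_ids)

# Example: a toy 5-word vocabulary and one mini-batch of token ids.
unigram = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
batch = np.array([[0, 1, 2, 3], [4, 3, 2, 1]])
print(noise_batch(batch, unigram, gamma=0.25))
```

The talk's contribution, as described in the abstract, is to show that this kind of noising corresponds to a Bayesian RNN with a particular mixture-of-Gaussians variational distribution, which then suggests principled prediction-time procedures and further smoothing variants.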