Learning to Create and Reuse Words in Open-Vocabulary Language Modeling
If you have a question about this talk, please contact Dimitri Kartsaklis.

Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the "bursty" distribution of such words. In this talk, we discuss a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus; MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.

This talk is part of the Language Technology Lab Seminars series.
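The abstract describes a hierarchical architecture: a character-level LSTM builds representations of words, a word-level LSTM carries sentence context, and a cache lets the model reuse recently generated words instead of spelling them out again. The sketch below is a minimal PyTorch illustration of that general idea, not the speaker's implementation; all module names, dimensions, and the cache policy are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code) of an open-vocabulary LM
# that either reuses a word from a cache of recently generated words or spells
# a new word character by character.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CachedCharWordLM(nn.Module):
    def __init__(self, n_chars, char_dim=32, word_dim=128, hidden_dim=256, cache_size=50):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Character-level LSTM that encodes a spelled-out word into a vector.
        # (A second character-level decoder, omitted here, would generate the
        # spelling of a brand-new word when the gate chooses not to reuse.)
        self.char_encoder = nn.LSTM(char_dim, word_dim, batch_first=True)
        # Word-level LSTM over the word vectors produced by the character encoder.
        self.word_lstm = nn.LSTMCell(word_dim, hidden_dim)
        # Gate: probability of reusing a cached word vs. spelling a new one.
        self.reuse_gate = nn.Linear(hidden_dim, 1)
        # Pointer scores over cache entries via dot-product attention.
        self.cache_proj = nn.Linear(hidden_dim, word_dim)
        self.cache_size = cache_size
        self.cache = []  # list of (word_string, word_vector) pairs

    def encode_word(self, char_ids):
        # char_ids: (1, word_len) tensor of character indices for one word.
        _, (h, _) = self.char_encoder(self.char_emb(char_ids))
        return h.squeeze(0)  # (1, word_dim)

    def step(self, word_vec, state):
        # Advance the word-level LSTM by one token and score the two routes:
        # copy a cached word, or hand off to the character-level generator.
        h, c = self.word_lstm(word_vec, state)
        p_reuse = torch.sigmoid(self.reuse_gate(h))  # P(copy from cache)
        if self.cache:
            keys = torch.cat([v for _, v in self.cache], dim=0)  # (cache, word_dim)
            scores = self.cache_proj(h) @ keys.t()               # (1, cache)
            cache_probs = F.softmax(scores, dim=-1)
        else:
            cache_probs = None
        return (h, c), p_reuse, cache_probs

    def update_cache(self, word, word_vec):
        # Simple recency cache of previously generated words.
        self.cache = [(w, v) for w, v in self.cache if w != word]
        self.cache.append((word, word_vec.detach()))
        self.cache = self.cache[-self.cache_size:]


if __name__ == "__main__":
    model = CachedCharWordLM(n_chars=100)
    state = (torch.zeros(1, 256), torch.zeros(1, 256))
    vec = model.encode_word(torch.randint(0, 100, (1, 5)))  # a 5-character word
    state, p_reuse, cache_probs = model.step(vec, state)
    model.update_cache("word", vec)
    print(p_reuse.item(), cache_probs)
```

The key design point mirrored here is that reuse is modeled explicitly: the gate and the pointer over the cache give previously generated words a second, cheap generation path, which is what lets the model capture the bursty reuse of newly coined word types.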