BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Will recurrent neural network language models scale? - Tony Robins
 on (Cantab Research)
DTSTART:20140307T120000Z
DTEND:20140307T130000Z
UID:TALK50998@talks.cam.ac.uk
CONTACT:Rogier van Dalen
DESCRIPTION:In “Up from trigrams! The struggle for improved language
  models” Fred Jelinek described the first use of trigrams in 1976 and
  then lamented “The surprising fact is that now\, a full 15 years
  later\, after all the solid progress in speech recognition\, the
  trigram model remains fundamental”. Almost two decades after this
  paper the situation was largely unchanged\, but in 2010 Tomas
  Mikolov presented the “Recurrent neural network based language
  model” (RNN LM). After many decades we now have a new means of
  language modelling that is clearly much better than the n-gram.
  Having actively pioneered the use of RNNs in the ’80s and ’90s\, the
  speaker asks whether RNNs will continue to outperform or whether
  there will be another “neural net winter”. This talk addresses the
  question of whether RNN LMs will scale by examining the scaling
  properties of n-grams\, and then doing the same for RNN LMs. Scaling
  is considered in terms of LM words\, number of parameters\,
  processing power and memory. Preliminary results will be presented
  showing the largest reductions in perplexity reported so far\, an
  analysis of performance on frequent and rare words\, results on the
  newly released 1-billion-word language modelling benchmark\, and the
  impact on word error rates in a commercial LVCSR system. The talk
  concludes by assessing whether RNN LMs will scale with respect to
  the previously incumbent n-grams.
LOCATION:Department of Engineering - LR6
END:VEVENT
END:VCALENDAR
