Unsupervised Learning from Users' Error Correction in Speech Dictation
- Speaker:
- Date & Time: Thursday 18 May 2006, 11:00–12:00
- Venue: Room 911, Rutherford Building, Cavendish Laboratory, Department of Physics
Abstract
http://www.cs.indiana.edu/~doyu/listenrain/LearnFromCorection-ICSLP2004.pdf
We propose an approach to adapting automatic speech recognition systems used in dictation systems through unsupervised learning from users' error corrections. The adaptation involves three steps: 1) infer whether the user is correcting a speech recognition error or simply editing the text, 2) infer the most probable cause of the error, and 3) adapt the system accordingly. To adapt the system effectively, we introduce an enhanced two-pass pronunciation learning algorithm that utilizes the output from both an n-gram phoneme recognizer and a Letter-to-Sound component. Our experiments show that the proposed approaches yield a relative word error rate reduction of more than 10%. Learning new words gives the largest performance gain, while adapting pronunciations and using a cache language model also produce small gains.
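The three steps above can be sketched as a simple adaptation loop. This is a minimal, illustrative sketch only: the heuristics, function names, and data structures below are assumptions for exposition, not the authors' implementation (which uses phoneme recognition and Letter-to-Sound output rather than string similarity).

```python
# Hypothetical sketch of the three-step adaptation described in the abstract.
# Assumption: a correction tends to be close in form to the misrecognized
# word, while an unrelated edit is not; string similarity stands in for the
# acoustic/phonetic evidence a real system would use.
from difflib import SequenceMatcher


def is_error_correction(original, replacement, threshold=0.5):
    """Step 1: guess whether a replacement corrects a recognition error
    or is just a text edit, via orthographic similarity."""
    similarity = SequenceMatcher(None, original.lower(), replacement.lower()).ratio()
    return similarity >= threshold


def diagnose_error(replacement, lexicon):
    """Step 2: infer the most probable cause of the error."""
    if replacement.lower() not in lexicon:
        return "new_word"       # the word is missing from the vocabulary
    return "pronunciation"      # word is known, so blame its pronunciation


def adapt(original, replacement, lexicon, cache_lm):
    """Step 3: adapt the recognizer according to the diagnosis."""
    if not is_error_correction(original, replacement):
        return "edit"           # plain text edit: nothing to learn
    cause = diagnose_error(replacement, lexicon)
    if cause == "new_word":
        lexicon.add(replacement.lower())            # learn the new word
    word = replacement.lower()
    cache_lm[word] = cache_lm.get(word, 0) + 1      # boost word in cache LM
    return cause


lexicon = {"speech", "recognition", "dictation"}
cache_lm = {}
print(adapt("recogniton", "recognition", lexicon, cache_lm))  # known word, close match
print(adapt("dictation", "hello", lexicon, cache_lm))         # unrelated replacement
```

In a real dictation system, `is_error_correction` would compare the recognizer's phoneme output against the corrected word, and the pronunciation branch would invoke the two-pass pronunciation learning the talk describes.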
Series This talk is part of the Machine Learning Journal Club series.
Included in Lists
- Cambridge talks
- Guy Emerson's list
- Hanchen DaDaDash
- Inference Group Journal Clubs
- Inference Group Summary
- Interested Talks
- Machine Learning Journal Club
- Machine Learning Summary
- ML
- Quantum Matter Journal Club
- Room 911, Rutherford Building, Cavendish Laboratory, Department of Physics
- rp587
- TQS Journal Clubs
- yk373's list