Efficient computer interfaces using continuous gestures, language models, and speech
If you have a question about this talk, please contact Phil Cowans.
Despite advances in speech recognition technology, users of dictation systems still face a significant amount of work correcting errors made by the recognizer. The goal of this work is to investigate the use of a continuous gesture-based data entry interface to provide an efficient and fun way for users to correct recognition errors. Towards this goal, techniques are investigated that expand a recognizer's results to better cover its recognition errors. Additionally, models are developed that use a speech recognizer's n-best list to build letter-based language models.
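As a rough illustration of the last point, the sketch below shows one way an n-best list could be turned into a smoothed letter-level n-gram model. The function names, the add-alpha smoothing, and the use of posterior-style hypothesis weights are assumptions made for this example, not details of the speaker's actual models.

from collections import defaultdict

def letter_ngram_counts(nbest, order=3):
    """Accumulate weighted letter n-gram counts from a recognizer's n-best list.

    nbest: list of (hypothesis_text, weight) pairs, where the weight might be
    the recognizer's posterior-style score for that hypothesis (an assumption).
    Returns a dict mapping (context_tuple, letter) -> weighted count.
    """
    counts = defaultdict(float)
    for text, weight in nbest:
        # Pad with start symbols so the first letters also have a full context.
        chars = ["<s>"] * (order - 1) + list(text.lower()) + ["</s>"]
        for i in range(order - 1, len(chars)):
            context = tuple(chars[i - order + 1:i])
            counts[(context, chars[i])] += weight
    return counts

def letter_probability(counts, context, letter, alphabet_size=27, alpha=0.1):
    """Estimate P(letter | context) with simple add-alpha smoothing.

    alphabet_size=27 (letters plus space) is a rough assumption for the sketch.
    """
    context_total = sum(c for (ctx, _), c in counts.items() if ctx == context)
    return (counts[(context, letter)] + alpha) / (context_total + alpha * alphabet_size)

# Toy usage: three hypotheses with posterior-style weights.
nbest = [("recognise speech", 0.6), ("wreck a nice beach", 0.3), ("recognize speech", 0.1)]
counts = letter_ngram_counts(nbest, order=3)
print(letter_probability(counts, ("r", "e"), "c"))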
This talk is part of the Cavendish Astrophysics Seminars series.