Uncertainty and Learning in Spoken Human-Computer Dialogue
If you have a question about this talk, please contact Dr Marcus Tomalin.
In any spoken dialogue with a computer, both speech recognition and semantic processing errors cause significant drops in performance. Recent work has proposed the Partially Observable Markov Decision Process (POMDP) as a framework for overcoming these difficulties. The POMDP model captures the uncertainty inherent in dialogue and also provides a mechanism for the system to adapt and learn what to say in each situation. While effective on small problems, the POMDP approach has struggled to scale to real-world dialogues. This talk introduces a POMDP-based approach that does scale: Bayesian networks enable efficient belief updates, and function approximation techniques combined with gradient-based learning provide an effective learning algorithm. Simulations show that the proposed framework outperforms standard techniques as error rates increase.
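As background, the belief update at the heart of any POMDP dialogue manager can be sketched as follows. This is an illustrative minimal example, not the talk's own Bayesian-network formulation; the state space, transition model `T`, and observation model `O` are invented for the toy dialogue below.

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """Exact POMDP belief update: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s).

    b: belief vector over hidden states
    T: dict mapping action -> transition matrix T[a][s, s']
    O: dict mapping action -> observation matrix O[a][s', o]
    a: action taken; o: observation received
    """
    predicted = b @ T[a]              # predict next-state distribution
    unnorm = predicted * O[a][:, o]   # weight by observation likelihood
    return unnorm / unnorm.sum()      # renormalise to a distribution

# Toy 2-state dialogue: hidden user goal is "wants A" or "wants B".
T = {0: np.array([[0.9, 0.1],       # action 0 ("ask"): goal mostly persists
                  [0.1, 0.9]])}
O = {0: np.array([[0.8, 0.2],       # noisy recogniser: P(heard "A"|wants A)=0.8
                  [0.3, 0.7]])}
b = np.array([0.5, 0.5])            # uniform prior over user goals
b = belief_update(b, T, O, a=0, o=0)  # system asked, recogniser heard "A"
print(b)                            # belief now favours "wants A"
```

The exact update shown here is tractable only for tiny state spaces; the scaling problem the talk addresses arises because real dialogue states are far too large for this direct computation, motivating the Bayesian-network factorisation.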
This talk is part of the Machine Intelligence Laboratory Speech Seminars series.