Uncertainty and Learning in Spoken Human-Computer Dialogue
If you have a question about this talk, please contact Dr Marcus Tomalin.
In any spoken dialogue with a computer, both speech recognition and semantic processing errors cause significant drops in performance. Recent work has suggested the Partially Observable Markov Decision Process (POMDP) as a method for overcoming these difficulties. The POMDP model captures the uncertainty inherent in dialogue and also provides a mechanism for the system to adapt and learn what to say in which situation. While effective on small problems, the POMDP approach has struggled to scale to real-world dialogues. This talk introduces a POMDP-based approach that does scale: Bayesian networks are used to implement efficient belief updates, and special function approximation techniques with gradient-based learning provide an effective learning algorithm. Simulations show that the proposed framework outperforms standard techniques as error rates increase.
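To make the belief-tracking idea concrete, the following is an illustrative sketch (not the speaker's implementation) of the standard exact belief update for a discrete POMDP, b'(s') ∝ P(o | s', a) Σ_s P(s' | s, a) b(s). The tiny two-state dialogue example, the action/observation names, and the model tables T and O are all invented for illustration.

```python
def belief_update(b, a, o, T, O):
    """One exact POMDP belief update.

    b: dict mapping state -> probability (current belief)
    a: action taken; o: observation received
    T[a][s][s2]: transition probability P(s2 | s, a)
    O[a][s2][o]: observation probability P(o | s2, a)
    """
    new_b = {}
    for s2 in b:
        # Predict: propagate the old belief through the transition model.
        prior = sum(T[a][s][s2] * b[s] for s in b)
        # Correct: weight by how well s2 explains the observation.
        new_b[s2] = O[a][s2][o] * prior
    z = sum(new_b.values())
    if z == 0:
        raise ValueError("observation has zero probability under the model")
    # Normalise so the belief is again a distribution.
    return {s: p / z for s, p in new_b.items()}

# Hypothetical example: is the user's goal "restaurant" or "hotel"?
T = {"ask": {"restaurant": {"restaurant": 1.0, "hotel": 0.0},
             "hotel":      {"restaurant": 0.0, "hotel": 1.0}}}  # static goal
O = {"ask": {"restaurant": {"heard_restaurant": 0.8, "heard_hotel": 0.2},
             "hotel":      {"heard_restaurant": 0.3, "heard_hotel": 0.7}}}
b0 = {"restaurant": 0.5, "hotel": 0.5}
b1 = belief_update(b0, "ask", "heard_restaurant", T, O)
```

Because the recogniser is noisy (it reports "heard_restaurant" with probability 0.3 even when the user wants a hotel), the belief shifts toward "restaurant" without collapsing onto it; maintaining this full distribution, rather than a single best hypothesis, is what lets a POMDP dialogue manager act robustly under recognition errors.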
This talk is part of the Machine Intelligence Laboratory Speech Seminars series.