
On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems


If you have a question about this talk, please contact Kris Cao.

The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning. In real-world applications, using explicit user feedback as the reward signal is often unreliable and costly to collect. This problem can be mitigated if the user's intent is known in advance or if data is available to pre-train a task success predictor off-line. In practice, neither of these applies for most real-world applications. In this talk, a practical method for training a dialogue system on-line with human users will be presented, whereby the dialogue policy is jointly trained alongside the reward model via active learning with a Gaussian process model. This Gaussian process operates on a continuous-space dialogue representation generated in an unsupervised fashion using a recurrent neural network encoder-decoder. The experimental results demonstrate that the proposed framework significantly reduces data annotation costs and mitigates noisy user feedback, achieving truly on-line policy learning.
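The active learning loop described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the talk's actual method: it uses simple Gaussian process regression in place of the GP classifier described in the talk, random vectors stand in for the RNN encoder-decoder dialogue embeddings, a synthetic rule stands in for real user feedback, and the class names and uncertainty threshold are invented for this sketch. The key idea it demonstrates is querying the user for a success label only when the reward model's predictive uncertainty is high, and otherwise using the model's prediction as the reinforcement learning reward.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector sets A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

class GPRewardModel:
    """Toy GP regression over dialogue embeddings; labels are task success (0/1)."""
    def __init__(self, noise=0.1):
        self.X, self.y = [], []
        self.noise = noise

    def add(self, x, label):
        self.X.append(x)
        self.y.append(label)

    def predict(self, x):
        """Return (mean, variance) of predicted task success for embedding x."""
        if not self.X:
            return 0.5, 1.0  # uninformative prior before any labels
        X, y = np.array(self.X), np.array(self.y)
        K = rbf(X, X) + self.noise * np.eye(len(X))
        k = rbf(X, x[None, :])[:, 0]
        K_inv = np.linalg.inv(K)
        mu = k @ K_inv @ y
        var = 1.0 - k @ K_inv @ k  # rbf(x, x) = 1 is the prior variance
        return float(mu), float(max(var, 0.0))

# On-line loop: query the user only when the model is uncertain.
rng = np.random.default_rng(0)
model = GPRewardModel()
STD_THRESHOLD = 0.3  # illustrative; queries happen above this predictive std
queries = 0
for _ in range(50):
    emb = rng.normal(size=8)          # stand-in for the encoder-decoder embedding
    mu, var = model.predict(emb)
    if np.sqrt(var) > STD_THRESHOLD:  # uncertain: actively ask the user
        label = float(emb[0] > 0)     # stand-in for (noisy) user success feedback
        model.add(emb, label)
        queries += 1
        reward = label
    else:                             # confident: use the predicted success
        reward = mu
    # `reward` would then drive the policy-gradient update of the dialogue policy
```

Because early dialogues all fall above the uncertainty threshold, annotation effort is concentrated at the start and tapers off as the GP covers the embedding space, which is the cost-saving behaviour the abstract describes.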

This talk is part of the NLIP Seminar Series.




© 2006-2024, University of Cambridge.