From passive to interactive (multimodal) language learning
If you have a question about this talk, please contact Kris Cao.
The way humans learn the meaning of words is a fundamental question in many different disciplines and, from a computational perspective, an answer to this question could lead to important advances in artificial intelligence. While the details of the learning process are still an open question, what we do know is that humans make use of the very rich perceptual input present in the communicative setups in which learning takes place.
In this talk, I will present our efforts in designing realistic multi-modal models of human word learning. I will start by introducing a model that assumes a purely passive learner in a non-communicative setup. I will then relax some of the learning assumptions and present a model that assumes communicative episodes. Finally, I will present our ongoing work towards interactive learning between two agents.
This talk is part of the NLIP Seminar Series.