Autonomous learning of multimodal internal models for robots using multiple sources of information

If you have a question about this talk, please contact Alberto Padoan.

Robots can learn new skills by autonomously acquiring internal models to use for action prediction, planning and control. Humans are a successful example of autonomous learning: internal models are developed through a learning process, beginning in the first months of an infant's life, that is based on experience and exploration. Similarly, through autonomous exploration a robot can bootstrap internal models of its own sensorimotor system that enable it to predict the consequences of its actions (forward models) or to produce new actions that reach target states (inverse models). The use of multiple sources of information can benefit such an autonomous learning process. I will first introduce an ensemble learning method that combines multiple prediction models to build forward models. Then, I will illustrate how the use of multiple sensory modalities (e.g. vision, touch, proprioception) plays a fundamental role in learning and performing multimodal tasks (such as playing a piano keyboard). Finally, I will present a multimodal deep variational auto-encoder architecture that allows a humanoid iCub robot to predict and imitate other agents' actions based only on its own learned internal model.
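To give a flavour of the ideas above, here is a minimal illustrative sketch (not taken from the talk) of an ensemble of forward models: a robot "motor-babbles" random actions in a toy one-dimensional sensorimotor system, fits several simple models on bootstrap resamples of the experience, and combines their predictions of the next state by averaging. All names and the toy dynamics are assumptions for illustration only.

```python
import numpy as np

# Toy sensorimotor system (unknown to the learner): next_state = state + 0.5 * action
rng = np.random.default_rng(0)

def explore(n):
    """Motor babbling: random actions from random states, observed with sensor noise."""
    s = rng.uniform(-1, 1, n)
    a = rng.uniform(-1, 1, n)
    s_next = s + 0.5 * a + rng.normal(0, 0.01, n)
    return np.column_stack([s, a]), s_next

# Fit an ensemble of linear forward models, each on a bootstrap resample
# of the same exploration data.
X, y = explore(200)
models = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    Xb = np.column_stack([X[idx], np.ones(len(idx))])  # add a bias column
    w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
    models.append(w)

def predict(state, action):
    """Ensemble forward model: average the member models' predictions."""
    x = np.array([state, action, 1.0])
    return float(np.mean([x @ w for w in models]))

# The ensemble prediction should be close to the true next state 0.2 + 0.5 * 0.4 = 0.4
print(predict(0.2, 0.4))
```

An inverse model would run in the opposite direction, mapping a current state and a target state to the action that connects them; the same exploration data can be reused to fit it.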

This talk is part of the CUED Control Group Seminars series.


© 2006-2020 Talks.cam, University of Cambridge.