Some lessons learned in Multimodal Representations and Transfer

If you have a question about this talk, please contact Andrew Caines.

Recent approaches based on transfer learning have made notable contributions to Visual Question Answering and Image Captioning, among other applications. While these models have shown promising results, recent work on understanding deep learning has exposed some of their glaring weaknesses. In this talk I will discuss three of my recent research directions that investigate the transfer learning framework in the context of vision-to-language tasks. First, I will ask whether current approaches to multimodal language models are sufficient, and present results that reveal the distributional properties of these models. Second, I will examine whether the training data for these models is sufficient for inferring their quality. Lastly, I will briefly present a recent proposal for quantitatively evaluating their performance. All three works are empirical analyses, but ones that I hope are interesting and thought-provoking.

This talk is part of the NLIP Seminar Series.
