University of Cambridge > NLIP Seminar Series > Generating Natural-Language Video Descriptions using LSTM Recurrent Neural Networks

Generating Natural-Language Video Descriptions using LSTM Recurrent Neural Networks

If you have a question about this talk, please contact Kris Cao.

We present a method for automatically generating English sentences that describe short videos using deep neural networks. Specifically, we apply convolutional and Long Short-Term Memory (LSTM) recurrent networks to translate videos into English descriptions within an encoder/decoder framework. A sequence of image frames (represented by deep visual features) is first mapped to a vector encoding the full video, and this encoding is then mapped to a sequence of words. We have also explored how statistical linguistic knowledge mined from large text corpora, specifically LSTM language models and lexical embeddings, can improve the descriptions. Experimental evaluation on a corpus of short YouTube videos and on movie clips annotated by the Descriptive Video Service demonstrates the capabilities of the technique by comparing its output to human-generated descriptions.
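The encoder/decoder pipeline described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the speaker's implementation: the weights are random rather than trained, the frame features stand in for real CNN activations, and the vocabulary, embedding table, and output projection (`vocab`, `embed`, `W_out`) are hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with randomly initialized (untrained) weights."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weight matrix for the input, forget, cell, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_size
        i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])
        g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def encode_video(frames, cell):
    """Fold a sequence of per-frame feature vectors into one fixed-length encoding."""
    h = c = np.zeros(cell.hidden_size)
    for x in frames:
        h, c = cell.step(x, h, c)
    return h  # encodes the full video

def decode_caption(encoding, cell, embed, W_out, vocab, max_len=10):
    """Greedily unroll the decoder LSTM from the video encoding into words."""
    h, c = encoding, np.zeros(cell.hidden_size)
    word, out = "<bos>", []
    for _ in range(max_len):
        h, c = cell.step(embed[word], h, c)
        word = vocab[int(np.argmax(W_out @ h))]
        if word == "<eos>":
            break
        out.append(word)
    return out

# Toy usage: 16 "frames" of 8-dim features, hidden size 6, 5-word vocabulary.
rng = np.random.default_rng(1)
frames = rng.normal(size=(16, 8))
enc_cell = LSTMCell(input_size=8, hidden_size=6, seed=0)
dec_cell = LSTMCell(input_size=5, hidden_size=6, seed=1)
vocab = ["a", "man", "rides", "bike", "<eos>"]
embed = {w: rng.normal(size=5) for w in vocab + ["<bos>"]}
W_out = rng.normal(size=(len(vocab), 6))

v = encode_video(frames, enc_cell)
caption = decode_caption(v, dec_cell, embed, W_out, vocab)
```

With trained weights, `caption` would be an English description of the clip; here it merely demonstrates the data flow from frame features to a fixed-length video vector to a word sequence.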

This talk is part of the NLIP Seminar Series.


© 2006-2023, University of Cambridge.