Using Neural Networks to Generate Image Captions
If you have a question about this talk, please contact Matthew Ireland.

Ever since the start of artificial intelligence, computer vision has been an area of interest for researchers around the globe. Studies going as far back as the 1970s form the basis of the algorithms we use today. Automatically describing the contents of an image is a fundamental challenge in artificial intelligence that connects computer vision and natural language processing. It has applications in many fields, for instance assisting visually impaired people or supporting educational purposes. In this talk, I will describe the algorithms used in the open-source solution implemented in TensorFlow. I will also give a short demonstration of the library, using an application I developed.

This talk is part of the Churchill CompSci Talks series.
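For readers unfamiliar with the approach the abstract alludes to, below is a minimal sketch of the CNN-encoder / LSTM-decoder architecture commonly used for neural image captioning, in the spirit of TensorFlow's open-source captioning model. The layer sizes, vocabulary size, and maximum caption length here are illustrative assumptions, not the exact configuration discussed in the talk.

```python
# Sketch of an encoder-decoder image captioning model (assumed hyperparameters).
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size
EMBED_DIM = 256      # assumed embedding size
UNITS = 512          # assumed LSTM hidden size
MAX_LEN = 20         # assumed maximum caption length

# Encoder: a pretrained CNN maps the image to a fixed-length feature vector.
cnn = tf.keras.applications.InceptionV3(include_top=False, pooling="avg")
cnn.trainable = False

image_input = tf.keras.Input(shape=(299, 299, 3))
image_features = tf.keras.layers.Dense(EMBED_DIM, activation="relu")(cnn(image_input))

# Decoder: an LSTM predicts the caption word by word, conditioned on the
# image features, which are used here to initialise the LSTM state.
caption_input = tf.keras.Input(shape=(MAX_LEN,))
word_embeddings = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(caption_input)
initial_state = [tf.keras.layers.Dense(UNITS)(image_features),
                 tf.keras.layers.Dense(UNITS)(image_features)]
lstm_out = tf.keras.layers.LSTM(UNITS, return_sequences=True)(
    word_embeddings, initial_state=initial_state)
next_word_logits = tf.keras.layers.Dense(VOCAB_SIZE)(lstm_out)

model = tf.keras.Model([image_input, caption_input], next_word_logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```

At inference time, captions are typically generated greedily or with beam search: the model is fed the words produced so far and the most likely next word is appended until an end-of-sentence token is emitted.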