
Multi-modal Image Processing: Data Models, Algorithms, and Applications


If you have a question about this talk, please contact Rachel Furner.

Note unusual time and venue

Many real-world data processing problems involve heterogeneous images acquired with different imaging modalities. These images often capture the same underlying phenomenon and therefore share common attributes, so it is of interest to devise new mechanisms that can effectively leverage such multi-modal data in a range of data processing tasks.

This talk proposes a multi-modal image processing framework based on joint sparse representations induced by coupled dictionary learning. In particular, our framework captures structural similarities across different image modalities, such as edges, corners, and other elementary primitives, in a learned sparse transform domain rather than the original pixel domain, allowing us to develop new multi-modal image processing algorithms for a number of tasks.
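
For readers unfamiliar with the idea, the sketch below illustrates a generic form of coupled dictionary learning: corresponding patches from two registered modalities are forced to share one sparse code, so the learned dictionary splits into two coupled sub-dictionaries. The patch matrices, dimensions, and the scikit-learn MiniBatchDictionaryLearning configuration are illustrative assumptions only, not the speaker's actual formulation or data.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Placeholder data: vectorized 8x8 patches from two spatially registered
# modalities (random here purely for illustration).
n_patches, patch_dim = 2000, 64
patches_mod_a = rng.standard_normal((n_patches, patch_dim))  # e.g. one modality
patches_mod_b = rng.standard_normal((n_patches, patch_dim))  # e.g. the other

# Concatenate corresponding patches so a single sparse code must explain both
# modalities; the learned dictionary then splits into coupled sub-dictionaries.
X = np.hstack([patches_mod_a, patches_mod_b])
learner = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                      batch_size=256, random_state=0)
learner.fit(X)
D_a = learner.components_[:, :patch_dim]   # sub-dictionary for modality A
D_b = learner.components_[:, patch_dim:]   # coupled sub-dictionary for modality B

# Cross-modal use: sparse-code a new modality-A patch against D_a, then
# synthesize the corresponding modality-B patch from the shared code via D_b.
new_a = rng.standard_normal((1, patch_dim))
code = sparse_encode(new_a, D_a, algorithm="omp", n_nonzero_coefs=5)
predicted_b = code @ D_b

This toy example learns a single joint dictionary on concatenated patches; the framework discussed in the talk involves coupled dictionaries and sparsity structure tailored to specific multi-modal tasks, which this sketch does not capture.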

Practical experiments with imaging data from a range of applications, including medical imaging and art investigation, demonstrate that our framework can deliver notable benefits over other state-of-the-art approaches, including deep learning algorithms.

This talk summarizes joint work with various collaborators, including Ingrid Daubechies (Duke U), Yonina Eldar (Technion), Lior Weizmann (Technion), Nikos Deligiannis (VUB), Bruno Cornellis (VUB), Pingfan Song (UCL), Joao Mota (Heriot-Watt U), Pier Luigi Dragotti (Imperial College London), and Xin Deng (Imperial College London).

This talk is part of the CMIH Hub seminar series.
