
Deep Dictionary Learning Approaches for Image Super-Resolution


  • Pier Luigi Dragotti (Imperial College London)
  • Thursday 05 March 2020, 15:00-16:00
  • MR 14

If you have a question about this talk, please contact Carola-Bibiane Schoenlieb.

Single-image super-resolution refers to the problem of obtaining a high-resolution (HR) version of a single low-resolution (LR) image. The problem is highly ill-posed, since many different high-resolution images can produce the same low-resolution observation.

Current strategies for solving the single-image super-resolution problem are learning-based: the model that maps the LR image to the HR image is learned from external image datasets.

Originally, learning-based approaches were built around the idea that both the LR and HR images admit a sparse representation in appropriate dictionaries, and that the sparsity patterns of the two representations can be shared when the design of the two dictionaries is properly coupled. More recently, deep neural network (DNN) architectures have led to state-of-the-art results.
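The coupled-dictionary idea can be sketched as follows. This is an illustrative toy, not the talk's actual method: the dictionaries here are random stand-ins (in practice they are learned jointly from LR/HR patch pairs), and `omp` is a minimal orthogonal matching pursuit written for the example. The key point is that the sparse code computed against the LR dictionary is reused, unchanged, with the HR dictionary.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, y, k):
    """Minimal orthogonal matching pursuit: greedy k-sparse code of y in D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        atoms = D[:, support]
        coef, *_ = np.linalg.lstsq(atoms, y, rcond=None)
        residual = y - atoms @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy coupled dictionaries with unit-norm atoms (hypothetical sizes).
n_atoms, lr_dim, hr_dim, k = 64, 16, 64, 3
D_lr = rng.standard_normal((lr_dim, n_atoms))
D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((hr_dim, n_atoms))
D_hr /= np.linalg.norm(D_hr, axis=0)

# Synthesize an LR patch with a known sparse code, then recover an HR patch
# by sparse-coding in D_lr and reusing the same coefficients in D_hr.
x_true = np.zeros(n_atoms)
x_true[[3, 17, 42]] = rng.standard_normal(3)
y_lr = D_lr @ x_true
x_hat = omp(D_lr, y_lr, k)       # shared sparsity pattern lives in x_hat
patch_hr = D_hr @ x_hat          # HR patch from the coupled dictionary
```

In the classical pipeline this is applied patch-by-patch, with the coupling between `D_lr` and `D_hr` enforced during joint training rather than assumed.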

Inspired by the recent success of deep neural networks and the recent effort to develop multi-layer sparse models, we propose an approach based on deep dictionary learning. The proposed architecture contains several layers of analysis dictionaries to extract high-level features and one synthesis dictionary which is designed to optimize the reconstruction task. Each analysis dictionary contains two sub-dictionaries: an information preserving analysis dictionary (IPAD) and a clustering analysis dictionary (CAD). The IPAD with its corresponding thresholds passes the key information from the previous layer, while the CAD with its properly designed thresholds provides a sparse representation of input data that facilitates discrimination of key features.
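The layered structure described above can be sketched roughly as follows. This is a hedged illustration, not the talk's implementation: the dictionaries are random stand-ins for learned ones, both sub-dictionaries use soft-thresholding here (in the actual framework the IPAD and CAD thresholds are designed differently for their respective roles), and the layer sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(z, lam):
    """Elementwise soft-thresholding nonlinearity."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

class AnalysisLayer:
    """One analysis layer with two sub-dictionaries: an information-preserving
    part (IPAD) and a clustering part (CAD). Random weights stand in for the
    learned dictionaries; lam stands in for the designed thresholds."""
    def __init__(self, in_dim, ipad_atoms, cad_atoms, lam=0.1):
        self.A_ipad = rng.standard_normal((ipad_atoms, in_dim)) / np.sqrt(in_dim)
        self.A_cad = rng.standard_normal((cad_atoms, in_dim)) / np.sqrt(in_dim)
        self.lam = lam

    def forward(self, x):
        z_ipad = soft_threshold(self.A_ipad @ x, self.lam)  # pass key information on
        z_cad = soft_threshold(self.A_cad @ x, self.lam)    # sparse, discriminative part
        return np.concatenate([z_ipad, z_cad])

# Two analysis layers followed by one synthesis dictionary for reconstruction.
layers = [AnalysisLayer(16, 16, 16),   # 16 -> 32 features
          AnalysisLayer(32, 24, 24)]   # 32 -> 48 features
D_syn = rng.standard_normal((64, 48)) / np.sqrt(48)  # synthesis dictionary

x = rng.standard_normal(16)            # vectorized LR input patch
for layer in layers:
    x = layer.forward(x)
patch_hr = D_syn @ x                   # HR patch estimate from the synthesis stage
```

The point of the sketch is the division of labour: analysis layers extract increasingly high-level features, and only the final synthesis dictionary is optimized for the reconstruction task.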

We then look at the multi-modal case and use the dictionary learning framework as a tool to model dependency across modalities, to dictate the architecture of a deep neural network, and to initialize the parameters of the network. Numerical results show that this approach leads to state-of-the-art results.
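One way a dictionary can both model cross-modal dependency and seed a network is sketched below. This is an assumption-laden illustration, not the talk's construction: stacking per-modality dictionaries couples the modalities through a shared sparse code, and the stacked dictionary's transpose is used to initialize an encoder layer. All names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Joint dictionary over two modalities (e.g. intensity and depth): stacking
# the per-modality dictionaries means one sparse code must explain both.
n_atoms, dim_a, dim_b = 48, 16, 16
D_a = rng.standard_normal((dim_a, n_atoms))
D_b = rng.standard_normal((dim_b, n_atoms))
D_joint = np.vstack([D_a, D_b])

# A shared sparse code produces a consistent multi-modal observation.
x = np.zeros(n_atoms)
x[[1, 7]] = [1.5, -2.0]
signal_a, signal_b = D_a @ x, D_b @ x

# Dictionary-informed initialization of a network layer (instead of random
# weights): the encoder is seeded with the joint analysis operator.
W_init = D_joint.T
features = np.maximum(W_init @ np.concatenate([signal_a, signal_b]), 0.0)  # ReLU
```

The design choice being illustrated is that the learned dictionary is not discarded after modelling: its structure dictates the layer shapes and its atoms provide a better-than-random starting point for training.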

This talk is part of the Applied and Computational Analysis series.
