Deep Dictionary Learning Approaches for Image Super-Resolution
If you have a question about this talk, please contact Carola-Bibiane Schoenlieb.

Single-image super-resolution refers to the problem of obtaining a high-resolution (HR) version of a single low-resolution (LR) image. The problem is highly ill-posed, since many different high-resolution images can lead to the same low-resolution one. Current strategies for single-image super-resolution are learning-based: the model that maps the LR image to the HR image is learned from external image datasets. Originally, learning-based approaches were built around the idea that both the LR and HR images admit sparse representations in suitable dictionaries, and that the sparsity patterns of the two representations can be shared when the design of the two dictionaries is properly coupled. More recently, deep neural network (DNN) architectures have led to state-of-the-art results.

Inspired by the recent success of deep neural networks and by recent efforts to develop multi-layer sparse models, we propose an approach based on deep dictionary learning. The proposed architecture contains several layers of analysis dictionaries to extract high-level features and one synthesis dictionary designed to optimize the reconstruction task. Each analysis dictionary contains two sub-dictionaries: an information-preserving analysis dictionary (IPAD) and a clustering analysis dictionary (CAD). The IPAD, with its corresponding thresholds, passes the key information from the previous layer, while the CAD, with its properly designed thresholds, provides a sparse representation of the input data that facilitates discrimination of key features.

We then turn to the multi-modal case and use the dictionary learning framework as a tool to model dependencies across modalities, to dictate the architecture of a deep neural network, and to initialize the parameters of the network. Numerical results show that this approach leads to state-of-the-art results.
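The layered structure described in the abstract can be illustrated with a short sketch. The following is a minimal NumPy example, not the speaker's implementation: the function names, the use of elementwise soft thresholding as the non-linearity, the merging of the IPAD and CAD sub-dictionaries into a single stacked matrix per layer, and all dimensions are assumptions made for exposition.

```python
import numpy as np


def soft_threshold(x, tau):
    """Elementwise soft thresholding, a common sparsifying non-linearity."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def deep_dictionary_sr(y, analysis_dicts, thresholds, D_synth):
    """Map a vectorised low-resolution patch y to a high-resolution patch.

    analysis_dicts : list of analysis dictionaries, one per layer; each is
                     assumed to stack the information-preserving (IPAD) and
                     clustering (CAD) sub-dictionaries row-wise.
    thresholds     : list of per-layer threshold vectors (one value per atom).
    D_synth        : synthesis dictionary reconstructing the HR patch from
                     the deepest feature vector.
    """
    z = y
    for Omega, tau in zip(analysis_dicts, thresholds):
        # Analysis layer: project onto the atoms, then threshold to sparsify.
        z = soft_threshold(Omega @ z, tau)
    # Synthesis layer: linear reconstruction of the HR patch.
    return D_synth @ z


# Illustrative sizes only: 8x8 LR patches (64-dim), two analysis layers,
# 16x16 HR patches (256-dim).
rng = np.random.default_rng(0)
dicts = [rng.standard_normal((128, 64)), rng.standard_normal((256, 128))]
taus = [0.1 * np.ones(128), 0.1 * np.ones(256)]
D_synth = rng.standard_normal((256, 256))
x_hr = deep_dictionary_sr(rng.standard_normal(64), dicts, taus, D_synth)
```

In this sketch the dictionaries and thresholds are fixed; in a learned model both would be trained from data, or used to dictate and initialize the layers of a deep neural network, as in the multi-modal case mentioned above.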
This talk is part of the Applied and Computational Analysis series.