University of Cambridge > Talks.cam > Signal Processing and Communications Lab Seminars

Challenges and Opportunities in Computational Imaging and Sensing
If you have a question about this talk, please contact Prof. Ramji Venkataramanan. This talk has been cancelled.

In many areas of science and engineering, new signal acquisition methods allow unprecedented access to physical measurements and are challenging the way in which we do signal and image processing. Within this broad theme of the interplay between sensing and processing, the talk focuses on new sampling methodologies inspired by the advent of event-based video cameras, and on solving selected inverse imaging problems, in particular when multi-modal images are acquired.

In the first part of the talk, we investigate biologically inspired time-encoding sensing systems as an alternative to classical sampling, and address the problem of reconstructing classes of sparse signals from time-based samples. Inspired by a new generation of event-based audio-visual sensing architectures, we consider a sampling mechanism that first filters the input and then obtains the timing information using leaky integrate-and-fire architectures. We show that, in this context, sampling by timing is equivalent to non-uniform sampling, where the reconstruction of the input depends on the characteristics of the filter and on the density of the non-uniform samples. Leveraging specific properties of the proposed filters, we derive sufficient conditions and propose novel algorithms for perfect reconstruction of classes of sparse signals from time-based samples. We then highlight further avenues for research in the emerging area of event-based sensing and processing.

We then move on to single-image super-resolution: the problem of obtaining a high-resolution (HR) version of a single low-resolution (LR) image. We consider the multi-modal case, where a scene is observed using different imaging modalities that have different resolutions.
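The filter-then-time-encode mechanism described above can be sketched with a toy leaky integrate-and-fire loop. The constants below (the leak time constant, the threshold, the sinusoidal test input) are illustrative choices for this sketch, not the architecture analysed in the talk:

```python
import numpy as np

def lif_encode(u, dt, tau=0.05, threshold=0.02):
    """Toy leaky integrate-and-fire (LIF) time encoder.

    Integrates the (pre-filtered) input with leak time constant tau;
    whenever the integrator state crosses the threshold, the current
    time is recorded as a spike and the state resets. The spike times
    are the 'time-based samples' of the input.
    """
    y = 0.0
    spike_times = []
    for n, u_n in enumerate(u):
        y += dt * (-y / tau + u_n)   # forward-Euler step of y' = -y/tau + u
        if y >= threshold:
            spike_times.append(n * dt)
            y = 0.0                  # reset after firing
    return np.array(spike_times)

# A slowly varying test input, offset so the integrator keeps charging.
# Spikes cluster where the input is large: the encoder output is a
# non-uniform sampling of the signal, which is the equivalence the
# abstract refers to.
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
u = 1.0 + np.sin(2 * np.pi * 3 * t)
spikes = lif_encode(u, dt)
print(f"{len(spikes)} spikes in 1 s")
```

Denser spike regions carry more information about the input, which is why reconstruction guarantees depend on the density of the resulting non-uniform samples.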
In this context, we use the dictionary learning and sparse representation framework as a tool to model dependency across modalities, in order to dictate the architecture of deep neural networks and to initialise their parameters. Numerical results show that this approach leads to state-of-the-art performance in multi-modal image super-resolution applications. If time permits, I will also present applications in the area of art investigation.

This talk is part of the Signal Processing and Communications Lab Seminars series.
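The coupled-dictionary idea behind this modelling can be sketched as follows: an LR patch and its HR counterpart are assumed to share the same sparse code, so coding the LR patch and synthesising with the HR dictionary yields the HR estimate. The random dictionaries and the plain matching-pursuit coder below are stand-ins for illustration, not the learned networks of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coupled dictionaries: columns of D_hr are HR patch atoms,
# D_lr holds the corresponding LR atoms (here random, for illustration).
n_atoms, hr_dim, lr_dim = 64, 16, 4
D_hr = rng.standard_normal((hr_dim, n_atoms))
D_lr = rng.standard_normal((lr_dim, n_atoms))
D_lr /= np.linalg.norm(D_lr, axis=0)   # unit-norm LR atoms

def sparse_code(y, D, k=3):
    """Greedy matching pursuit: repeatedly pick the atom most
    correlated with the residual and subtract its contribution."""
    x = np.zeros(D.shape[1])
    r = y.copy()
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ r))
        c = D[:, j] @ r
        x[j] += c
        r -= c * D[:, j]
    return x

# Super-resolve one LR patch: code it in D_lr, synthesise with D_hr.
y_lr = 2.0 * D_lr[:, 5]            # an LR patch built from atom 5
x = sparse_code(y_lr, D_lr)        # shared sparse code
y_hr = D_hr @ x                    # HR patch estimate
print("dominant atom:", np.argmax(np.abs(x)))
```

Because the code `x` is recovered from the LR observation alone, the HR dictionary supplies the high-frequency detail; this dependency structure is what the talk uses to shape and initialise the network architecture.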