University of Cambridge > Talks.cam > Cambridge Analysts' Knowledge Exchange

Can neural networks always be trained? On the boundaries of deep learning
If you have a question about this talk, please contact pat47.

Deep learning has emerged as a competitive new tool in image reconstruction. However, recent results demonstrate that such methods are typically highly unstable: tiny, almost undetectable perturbations cause severe artefacts in the reconstruction, a major concern in practice. This is paradoxical given the existence of stable state-of-the-art methods for these problems; indeed, approximation-theoretic results non-constructively imply the existence of stable and accurate neural networks. Hence the fundamental question: can we explicitly construct or train stable and accurate neural networks for image reconstruction?

I will discuss two results in this direction. The first is a negative result, saying that such constructions are in general impossible, even given access to the solutions of common optimisation problems such as basis pursuit. The second is a positive result, saying that under sparsity assumptions, such neural networks can be constructed. These networks are stable and theoretically competitive with state-of-the-art results from other methods. Numerical examples of competitive performance are also provided.

This talk is part of the Cambridge Analysts' Knowledge Exchange series.
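For readers unfamiliar with basis pursuit, the optimisation problem named in the abstract, a minimal sketch follows: it seeks the minimum-ℓ1-norm solution of an underdetermined linear system, min ‖x‖₁ subject to Ax = b, which under sparsity assumptions recovers the sparse ground truth. The problem sizes, random data, and linear-programming reformulation below are illustrative choices of mine, not taken from the talk.

```python
# Basis pursuit: min ||x||_1  s.t.  Ax = b, recast as a linear program.
# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 20, 50                           # underdetermined: 20 measurements, 50 unknowns
A = rng.standard_normal((m, n))

x_true = np.zeros(n)                    # 3-sparse ground-truth signal
x_true[[4, 17, 33]] = [1.5, -2.0, 0.7]
b = A @ x_true

c = np.ones(2 * n)                      # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])               # constraint: A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovered sparse signal:", np.allclose(x_hat, x_true, atol=1e-3))
```

With Gaussian measurements and sufficiently many rows relative to the sparsity level, the ℓ1 minimiser typically coincides with the sparse ground truth, which is the "sparsity assumptions" regime the positive result of the talk refers to.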