Wasserstein Embeddings in the Deep Learning Era
MDL - Mathematics of Deep Learning

Computational optimal transport has found many applications in machine learning and, more specifically, deep learning, as a fundamental tool for manipulating and comparing probability distributions. The Wasserstein distances arising from the optimal transport problem have been of particular interest in recent years. However, a consistent roadblock to the wider use of transport-based methods has been their computational cost. Beyond the better-known ideas for faster computation, such as entropy regularization, several fundamental concepts have emerged that enable transport-based methods to be integrated into the computational graph of a deep neural network. Sliced-Wasserstein distances and the Linear Optimal Transport (LOT) framework are among the concepts best suited for integration into today's deep neural networks. In this talk, we will present the idea of Linear Optimal Transport (otherwise known as the Wasserstein Embedding) and its extension to Sliced-Wasserstein Embeddings, and demonstrate their various applications in deep learning, with particular interest in learning from graphs and set-structured data. The talk will be an overview of our recent ICLR 2021 and NeurIPS 2021 publications.

This talk is part of the Isaac Newton Institute Seminar Series.
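The computational appeal of the sliced-Wasserstein distance mentioned in the abstract is that the one-dimensional Wasserstein distance has a closed form: it compares the sorted projections of the two sample sets. Below is a minimal NumPy sketch of a Monte Carlo estimator over random projection directions; the function name, the equal-sample-size assumption, and all parameter choices are illustrative and not taken from the talk or its papers.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between
    two empirical distributions given as sample arrays X, Y of shape (n, d).

    Assumes both sets have the same number of samples, so the 1-D optimal
    transport plan along each projection is simply the sorted pairing.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Draw a random unit direction on the sphere.
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # Project both sample sets onto the direction and sort:
        # sorting realizes the 1-D optimal coupling.
        x_proj = np.sort(X @ theta)
        y_proj = np.sort(Y @ theta)
        total += np.mean(np.abs(x_proj - y_proj) ** p)
    return (total / n_projections) ** (1.0 / p)
```

Because each projection needs only a sort, the cost per direction is O(n log n), which is what makes the sliced variant practical inside a training loop compared with solving a full optimal transport problem.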