Diffeomorphism-based feature learning using Poincaré inequalities

RCLW01 - Uncertainty in multivariate, non-Euclidean, and functional spaces: theory and practice

We propose a gradient-enhanced algorithm for high-dimensional scalar or vector-valued function approximation. The algorithm proceeds in two steps: first, we reduce the input dimension by learning the relevant input features from gradient evaluations; second, we regress the function output against the pre-learned features. To ensure theoretical guarantees, we construct the feature map as the first components of a diffeomorphism, which we learn by minimizing an error bound obtained by applying the Poincaré inequality either in the input space or in the feature space. This leads to two different strategies, which we compare both theoretically and numerically and relate to existing methods in the literature. In addition, we propose a dimension-augmentation trick to increase the approximation power of the learned features. In practice, we construct the diffeomorphism using coupling flows, a particular class of invertible neural networks. Numerical experiments on various high-dimensional functions show that the proposed algorithm outperforms state-of-the-art competitors, especially with small datasets.
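To make the two-step idea concrete, the following is a minimal Python/PyTorch sketch, not the authors' implementation: it builds a diffeomorphism from RealNVP-style affine coupling layers, takes its first m outputs as the feature map, and trains it by minimizing a Poincaré-type surrogate, namely the mean squared norm of the part of the sampled gradients of f that is not captured by the span of the feature map's Jacobian rows. All class and function names (AffineCoupling, CouplingFlow, poincare_loss) are illustrative; the exact bound, flow architecture, and dimension-augmentation trick from the talk are not specified in this abstract.

# Illustrative sketch only; assumptions noted in the comments.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One affine coupling layer: an invertible map on R^d
    # (the inverse is omitted here for brevity).
    def __init__(self, d, hidden=64):
        super().__init__()
        self.d1 = d // 2
        self.net = nn.Sequential(
            nn.Linear(self.d1, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (d - self.d1)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(torch.tanh(s)) + t   # invertible given x1
        y = torch.cat([x1, y2], dim=-1)
        return y.flip(-1)                        # permute so later layers mix all coordinates

class CouplingFlow(nn.Module):
    # Diffeomorphism Phi on R^d built from stacked coupling layers.
    def __init__(self, d, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(d) for _ in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def poincare_loss(flow, x, grad_f, m):
    # Surrogate error bound: penalize the component of grad f(x) that lies
    # outside the row span of the Jacobian of the first m components of Phi.
    x = x.requires_grad_(True)
    z = flow(x)[:, :m]                           # learned features g(x)
    J = torch.stack([                            # Jacobian rows dz_k/dx, shape (batch, m, d)
        torch.autograd.grad(z[:, k].sum(), x, create_graph=True)[0]
        for k in range(m)
    ], dim=1)
    G = J @ J.transpose(1, 2)                    # Gram matrices, shape (batch, m, m)
    rhs = J @ grad_f.unsqueeze(-1)               # (batch, m, 1)
    coef = torch.linalg.solve(G + 1e-6 * torch.eye(m), rhs)
    residual = grad_f - (J.transpose(1, 2) @ coef).squeeze(-1)
    return residual.pow(2).sum(-1).mean()

After minimizing poincare_loss over gradient samples (e.g. with Adam), the second step would regress the observed outputs against the learned features flow(x)[:, :m] using any standard regressor; the choice between applying the inequality in the input space or in the feature space, as described in the abstract, changes the bound being minimized but not this overall structure.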

This talk is part of the Isaac Newton Institute Seminar Series.
