
Optimal sampling: from linear to nonlinear approximation



DREW01 - Multivariate approximation, discretization, and sampling recovery

Abstract: We consider the approximation of functions from point evaluations, using linear or nonlinear approximation tools. For linear approximation, recent results show that weighted least-squares projections yield quasi-optimal approximations in $L^2$ (in expectation) with a near-optimal sampling budget. This can be achieved by drawing i.i.d. samples from suitable distributions (depending on the linear approximation tool) and applying subsampling methods [1,2].

In the first part of this talk, we review different strategies based on i.i.d. sampling and present alternative strategies based on repulsive point processes (or volume sampling) that perform the same task with a reduced sampling complexity [3].

In the second part, we show how these methods can be used to approximate functions with nonlinear approximation tools, by coupling iterative algorithms on manifolds with optimal sampling methods for the (quasi-)projection onto successive linear spaces [4]. The proposed algorithm can be interpreted as a stochastic gradient method using optimal sampling, with provable convergence properties under classical convexity and smoothness assumptions. It can also be interpreted as a natural gradient descent on a manifold embedded in $L^2$, which appears as a Newton-type algorithm when written in terms of the coordinates of a parametrized manifold. When we only have access to generating systems of the successive linear spaces, iterative methods can be used to obtain an approximation of the optimal distributions [5].

Finally, we return to linear approximation and present a new approach for obtaining quasi-optimal approximations of functions in reproducing kernel Hilbert spaces, using a kernel-based projection and volume sampling [6].

These are joint works with R. Gruhlke, B. Michel, and P. Trunschke.

References:

[1] M. Sonnleitner and M. Ullrich. On the power of iid information for linear approximation. Journal of Applied and Numerical Analysis, 1(1):88–126, Dec. 2023.
[2] C. Haberstich, A. Nouy, and G. Perrin. Boosted optimal weighted least-squares. Mathematics of Computation, 91(335):1281–1315, 2022.
[3] A. Nouy and B. Michel. Weighted least-squares approximation with determinantal point processes and generalized volume sampling. arXiv:2312.14057.
[4] R. Gruhlke, A. Nouy, and P. Trunschke. Optimal sampling for stochastic and natural gradient descent. arXiv:2402.03113.
[5] P. Trunschke and A. Nouy. Optimal sampling for least squares approximation with general dictionaries. arXiv:2407.07814.
[6] P. Trunschke and A. Nouy. Almost-sure quasi-optimal approximation in reproducing kernel Hilbert spaces. arXiv:2407.06674.
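To make the i.i.d. optimal sampling strategy concrete, here is a minimal Python sketch for a space spanned by Legendre polynomials on [-1, 1]: points are drawn (by rejection sampling) from the density proportional to the inverse Christoffel function of the space, and the reciprocal of that density supplies the least-squares weights. The basis, sample sizes, and rejection sampler are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def legendre_features(x, m):
    # First m Legendre polynomials, orthonormal w.r.t. the uniform
    # probability measure on [-1, 1].
    P = np.polynomial.legendre.legvander(x, m - 1)
    return P * np.sqrt(2 * np.arange(m) + 1)

def sample_optimal(n, m, rng):
    # Rejection sampling from the density proportional to the inverse
    # Christoffel function k_m(x)/m = (1/m) * sum_j phi_j(x)^2.
    grid = np.linspace(-1.0, 1.0, 2001)
    c = ((legendre_features(grid, m) ** 2).sum(axis=1) / m).max() * 1.01
    samples = []
    while len(samples) < n:
        x = rng.uniform(-1.0, 1.0)
        d = (legendre_features(np.array([x]), m) ** 2).sum() / m
        if rng.uniform(0.0, c) < d:
            samples.append(x)
    return np.array(samples)

def weighted_least_squares(f, n, m, rng):
    x = sample_optimal(n, m, rng)
    Phi = legendre_features(x, m)
    w = m / (Phi ** 2).sum(axis=1)   # weight w(x) = inverse density ratio
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * f(x), rcond=None)
    return coef

rng = np.random.default_rng(0)
f = lambda x: np.exp(x) * np.sin(3 * x)
coef = weighted_least_squares(f, n=60, m=10, rng=rng)
xt = np.linspace(-1, 1, 201)
print("max error:", np.max(np.abs(legendre_features(xt, 10) @ coef - f(xt))))
```

The sample size n is taken a small multiple of the dimension m, which is the regime where theory guarantees stability of the weighted Gram matrix with high probability.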
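For the repulsive point processes of [3], exact samplers exist; as a rough stand-in one can target the (generalized) volume-sampling distribution over a finite candidate pool, where a size-n subset S has probability proportional to det(Phi_S^T Phi_S), using a simple Metropolis exchange chain. The pool, chain length, and feature map below are arbitrary choices for illustration, not the algorithm of [3].

```python
import numpy as np

def volume_sample(Phi, n, n_steps, rng):
    # Metropolis exchange chain targeting P(S) proportional to
    # det(Phi_S^T Phi_S) over size-n subsets of the rows of Phi.
    N = Phi.shape[0]
    S = list(rng.choice(N, size=n, replace=False))

    def logdet(idx):
        M = Phi[idx]
        sign, val = np.linalg.slogdet(M.T @ M)
        return val if sign > 0 else -np.inf

    cur = logdet(S)
    for _ in range(n_steps):
        i = int(rng.integers(n))   # position in S to swap out
        j = int(rng.integers(N))   # pool element to swap in
        if j in S:
            continue
        prop = S.copy()
        prop[i] = j
        new = logdet(prop)
        # symmetric proposal, so plain Metropolis acceptance applies
        if new >= cur or rng.random() < np.exp(new - cur):
            S, cur = prop, new
    return np.array(S)

# usage: pick 12 repulsive nodes for a 10-dimensional polynomial space
rng = np.random.default_rng(0)
pool = np.linspace(-1, 1, 500)
Phi = np.polynomial.legendre.legvander(pool, 9)
nodes = pool[volume_sample(Phi, n=12, n_steps=5000, rng=rng)]
```

The determinant penalizes clustered nodes, which is the repulsion effect that reduces the sampling complexity compared with plain i.i.d. draws.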
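The coupling of stochastic gradient descent with adapted sampling in [4] can be caricatured in a few lines: at each step, gradient samples are drawn from a density proportional to the squared norm of the tangent (gradient) features, and importance weights keep the gradient estimator unbiased. Everything below, including the model, the density surrogate, and the step size, is an illustrative assumption rather than the algorithm of [4].

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)

def model(theta, x):
    # small nonlinear model: a sum of two scaled tanh ridge functions
    return theta[0] * np.tanh(theta[1] * x) + theta[2] * np.tanh(theta[3] * x)

def grad_theta(theta, x):
    # gradient of the model w.r.t. its four parameters, shape (4, len(x))
    t1, t2 = np.tanh(theta[1] * x), np.tanh(theta[3] * x)
    return np.stack([t1, theta[0] * (1 - t1 ** 2) * x,
                     t2, theta[2] * (1 - t2 ** 2) * x])

theta = 0.5 * rng.normal(size=4)
for _ in range(2000):
    # crude surrogate for the optimal density: proportional to the
    # squared norm of the tangent features on a fresh uniform pool
    pool = rng.uniform(-1, 1, 256)
    dens = (grad_theta(theta, pool) ** 2).sum(axis=0) + 1e-12
    p = dens / dens.sum()
    idx = rng.choice(256, size=8, p=p)
    x, w = pool[idx], 1.0 / (256 * p[idx])   # importance weights
    r = model(theta, x) - f(x)               # pointwise residual
    grad = 2 * (w * r * grad_theta(theta, x)).mean(axis=1)  # unbiased estimate
    theta -= 0.05 * grad

xt = np.linspace(-1, 1, 201)
print("rms error:", np.sqrt(np.mean((model(theta, xt) - f(xt)) ** 2)))
```

The weights 1/(256 p) exactly cancel the sampling bias, so in expectation each step follows the gradient of the $L^2$ loss over the pool.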
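Finally, the kernel-based projection of [6] builds on the classical minimum-norm recovery in an RKHS: given nodes X and kernel k, the approximation is f_hat = k(., X) K(X, X)^{-1} f(X). The sketch below uses a Gaussian kernel and i.i.d. uniform nodes purely for simplicity; [6] instead draws the nodes by volume sampling to obtain almost-sure quasi-optimality.

```python
import numpy as np

def gauss_kernel(x, y, ell=0.3):
    # Gaussian (RBF) kernel with length scale ell (an arbitrary choice here)
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

rng = np.random.default_rng(2)
f = lambda x: np.abs(x) * np.sin(2 * x)

X = rng.uniform(-1, 1, 15)                        # evaluation nodes
K = gauss_kernel(X, X) + 1e-10 * np.eye(len(X))   # jitter for stability
alpha = np.linalg.solve(K, f(X))                  # kernel coefficients

xt = np.linspace(-1, 1, 201)
fhat = gauss_kernel(xt, X) @ alpha                # minimum-norm interpolant
print("max error:", np.max(np.abs(fhat - f(xt))))
```

The error here depends heavily on the node set, which is precisely why the choice of sampling distribution for X is the crux of the approach in [6].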

This talk is part of the Isaac Newton Institute Seminar Series.

