
Optimal score estimation via empirical Bayes smoothing


  • Andre Wibisono (Yale University)
  • Monday 15 July 2024, 16:00-17:00
  • External.


DMLW01 - International workshop on diffusions in machine learning: foundations, generative models, and optimisation

We study the problem of estimating the score function of an unknown probability distribution $\rho$ from $n$ independent and identically distributed observations in $d$ dimensions. Assuming that $\rho$ is subgaussian and has a Lipschitz-continuous score function $s$, we establish the optimal rate of $\tilde{\Theta}(n^{-\frac{2}{d+4}})$ for this estimation problem under the loss function $\|\hat s - s\|^2_{L^2(\rho)}$ that is commonly used in the score matching literature, highlighting the curse of dimensionality: the sample complexity of accurate score estimation grows exponentially with the dimension $d$. Leveraging key insights from empirical Bayes theory, together with a new convergence rate for the smoothed empirical distribution in Hellinger distance, we show that a regularized score estimator based on a Gaussian kernel attains this rate, which is shown to be optimal by a matching minimax lower bound. We also discuss the implications of our theory for the sample complexity of score-based generative models. Joint work with Yihong Wu and Kaylee Yang.
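
At its core, the estimator discussed above is the score of the Gaussian-smoothed empirical distribution, which empirical Bayes theory (via Tweedie's formula) expresses in closed form as a softmax-weighted average of the samples. The Python sketch below illustrates that kernel score; it is a minimal illustration under stated assumptions, not the paper's full regularized estimator (the exact regularization and bandwidth choice are omitted), and the function name and parameters are hypothetical.

  import numpy as np
  from scipy.special import logsumexp

  def gaussian_kernel_score(x, samples, h):
      """Score of the Gaussian-smoothed empirical distribution at points x.

      With p_h = (1/n) * sum_i N(X_i, h^2 I), Tweedie's formula gives
      grad log p_h(x) = (E_w[X | x] - x) / h^2, where E_w[X | x] is the
      average of the samples X_i under softmax weights proportional to
      exp(-||x - X_i||^2 / (2 h^2)).
      """
      diffs = x[:, None, :] - samples[None, :, :]     # (m, n, d) pairwise differences
      logw = -np.sum(diffs ** 2, axis=-1) / (2 * h ** 2)
      logw -= logsumexp(logw, axis=1, keepdims=True)  # normalize in log space for stability
      posterior_mean = np.exp(logw) @ samples         # empirical-Bayes posterior mean, (m, d)
      return (posterior_mean - x) / h ** 2

  # Sanity check: for rho = N(0, I) the true score is s(x) = -x, and the
  # smoothed estimate concentrates around -x / (1 + h^2) as n grows.
  rng = np.random.default_rng(0)
  X = rng.standard_normal((2000, 2))                  # n = 2000 i.i.d. samples, d = 2
  pts = np.array([[1.0, 0.0], [0.0, -1.0]])
  print(gaussian_kernel_score(pts, X, h=0.5))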

This talk is part of the Isaac Newton Institute Seminar Series.

