
Denoising Levy Probabilistic Models


  • Alain Oliviero-Durmus (École Polytechnique)
  • Wednesday 03 July 2024, 10:30-12:00
  • External.


DML - Diffusions in machine learning: Foundations, generative models and non-convex optimisation

Investigating the noise distribution beyond Gaussians for noise injection in diffusion generative models remains an open problem. The Gaussian case has been a large success experimentally and theoretically, admitting a unified stochastic differential equation (SDE) framework that encompasses both score-based and denoising formulations. Recent studies have investigated the potential of heavy-tailed noise distributions to mitigate mode collapse and to effectively manage datasets exhibiting class imbalance, heavy tails, or prominent outliers. Very recently, Yoon et al. (NeurIPS 2023) presented the Levy-Ito model (LIM), which directly extended the SDE-based framework to a class of heavy-tailed SDEs where the injected noise follows an α-stable distribution, a rich class of heavy-tailed distributions. Despite its theoretical elegance and performance improvements, LIM relies on highly involved mathematical techniques, which may limit its accessibility and hinder its broader adoption and further development. In this study, we take a step back and, instead of starting from the SDE formulation, extend the denoising diffusion probabilistic model (DDPM) by directly replacing the Gaussian noise with α-stable noise. We show that, using only elementary proof techniques, the proposed approach, the denoising Levy probabilistic model (DLPM), algorithmically boils down to running vanilla DDPM with minor modifications, hence allowing the use of existing implementations with minimal changes. Remarkably, as opposed to the Gaussian case, DLPM and LIM yield different backward processes, leading to distinct sampling algorithms. This fundamental difference translates favorably for the performance of DLPM in several respects: our experiments show that DLPM achieves better coverage of the tails of the data distribution, better generation of unbalanced datasets, and improved computation times, requiring a significantly smaller number of backward steps.
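The abstract above says that DLPM "algorithmically boils down to running vanilla DDPM with minor modifications", with the Gaussian noise swapped for α-stable noise. The snippet below is a minimal sketch of what that swap could look like for the forward noising step, assuming SciPy's levy_stable sampler; the stability index, the noise schedule, and the scaling exponent are illustrative placeholders, not the paper's actual choices.

import numpy as np
from scipy.stats import levy_stable

def ddpm_forward(x0, t, alpha_bar, rng=None):
    # Vanilla DDPM forward marginal: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I).
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def dlpm_forward(x0, t, alpha_bar, stability=1.8, rng=None):
    # Same recursion with the Gaussian replaced by symmetric alpha-stable noise.
    # The (1 - abar_t)**(1/stability) scale is one natural choice consistent with
    # the stability property; the paper's exact schedule may differ.
    eps = levy_stable.rvs(stability, 0.0, size=x0.shape, random_state=rng)
    return np.sqrt(alpha_bar[t]) * x0 + (1.0 - alpha_bar[t]) ** (1.0 / stability) * eps

# Toy usage with a linear variance schedule (illustrative only):
# alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 2e-2, 1000))
# x_t = dlpm_forward(np.zeros((16, 2)), t=500, alpha_bar=alpha_bar)

For stability = 2 the α-stable sampler reduces, up to a scale factor, to the Gaussian, so the sketch degenerates to ordinary DDPM; heavier tails correspond to smaller values of the stability index.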
If time permits, I will also discuss:

Score diffusion models without early stopping: finite Fisher information is all you need

Diffusion models are a new class of generative models that revolve around the estimation of the score function associated with a stochastic differential equation. Once acquired, the approximated score function is harnessed to simulate the corresponding time-reversal process, ultimately enabling the generation of approximate data samples. Despite the evident practical significance of these models, a notable challenge persists in the form of a lack of comprehensive quantitative results, especially in scenarios involving non-regular scores and estimators. In almost all reported bounds in Kullback-Leibler (KL) divergence, it is assumed that either the score function or its approximation is Lipschitz uniformly in time; however, this condition is very restrictive in practice or appears difficult to establish. To circumvent this issue, previous works mainly focused on establishing convergence bounds in KL for an early-stopped version of the diffusion model and a smoothed version of the data distribution, or on assuming that the data distribution is supported on a compact manifold. These explorations have led to interesting bounds in either Wasserstein or Fortet-Mourier metrics. However, the question remains as to the relevance of such early-stopping procedures or compactness conditions, and in particular whether there exists a natural and mild condition ensuring explicit and sharp convergence bounds in KL. In this article, we tackle the aforementioned limitations by focusing on score diffusion models with fixed step size stemming from the Ornstein-Uhlenbeck semigroup and its kinetic counterpart. Our study provides a rigorous analysis, yielding simple, improved, and sharp convergence bounds in KL applicable to any data distribution with finite Fisher information with respect to the standard Gaussian distribution.
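For orientation, the two central objects named in this second abstract admit standard formulations; the notation below is a textbook rendering and is not necessarily the one used in the talk.

% Relative Fisher information of the data distribution p_0 with respect to the
% standard Gaussian gamma = N(0, I_d); "finite Fisher information" means this
% quantity is finite.
\[
  \mathcal{I}(p_0 \,\|\, \gamma)
    = \int_{\mathbb{R}^d} \Bigl\| \nabla \log \frac{\mathrm{d}p_0}{\mathrm{d}\gamma}(x) \Bigr\|^2 \, p_0(\mathrm{d}x)
\]
% Ornstein-Uhlenbeck forward dynamics whose semigroup underlies the fixed
% step-size analysis; gamma is its stationary distribution.
\[
  \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \qquad X_0 \sim p_0
\]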

This talk is part of the Isaac Newton Institute Seminar Series.
