Sampling with diffusion models
Teams link available upon request (it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk). Sign up to the mailing list via lists.cam.ac.uk to receive reminders.
In this talk, Shreyas and Jiajun will discuss sampling with diffusion models. We cover two cases: first, posterior (or conditional) sampling with diffusion models, with applications in inverse imaging, class-conditional sampling, and text-to-image guidance and finetuning; and second, learning a diffusion model to draw samples from an unnormalized density.
For the former, we cover inference-only corrections to existing diffusion models that fall under the umbrella of “reconstruction guidance” (DPS, RED-diff, etc.), as well as training-based methods such as classifier and classifier-free guidance. Finally, we discuss some recent work on efficient finetuning (ControlNet, DEFT, etc.) and give an introduction to stochastic control techniques (DEFT, Adjoint Matching).
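As a rough pointer to the two guidance flavours above (notation is ours, not necessarily the speakers'): classifier-free guidance combines conditional and unconditional noise predictions,

\tilde{\epsilon}_\theta(x_t, c) = \epsilon_\theta(x_t, \varnothing) + w \,\big[\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\big],

with guidance scale w, while DPS-style reconstruction guidance approximates the intractable likelihood score through the Tweedie estimate \hat{x}_0(x_t) of the clean signal,

\nabla_{x_t} \log p_t(y \mid x_t) \;\approx\; \nabla_{x_t} \log p\big(y \mid \hat{x}_0(x_t)\big).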
For the latter, we will introduce some recently developed diffusion-based neural samplers, including denoising diffusion samplers (DDS, iDEM, etc.), escorted samplers (CMCD, etc.), and other variants.
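For orientation, these samplers assume access to an unnormalized target \pi(x) \propto \exp(-E(x)) with unknown normalizing constant, and learn the drift of a diffusion process whose terminal marginal approximates \pi. A common formulation (a sketch of the general setup, in our notation) minimizes a divergence between path measures,

\min_\theta \; \mathrm{KL}\big(\mathbb{Q}^\theta \,\|\, \mathbb{P}^\pi\big),

where \mathbb{Q}^\theta is the law of the learned controlled process and \mathbb{P}^\pi is a reference process pinned to \pi at the terminal time.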
This talk is part of the Machine Learning Reading Group @ CUED series.