Abstract:
The Ermon-Song algorithm (also sometimes known as stable diffusion), introduced in their 2019 and 2021 papers, is a basic approach to producing samples from a "complicated" probability distribution, starting with samples from another, known distribution from which sampling is easier. Without claiming any expertise in the generative or training aspects of the model, we will discuss this algorithm in the context of PDE methods for speeding up sampling, as well as the convergence of a discrete version of the Ermon-Song algorithm when the distribution one needs to sample from has singular support. The ingredients in the proof are entirely elementary, but to the best of our knowledge some of them may be new.
This is a joint work with Ayya Alieva and Gautam Iyer.