Iterative alpha-(de)blending
1. Introduction
1.1. Sampling a distribution
- The central problem is how to sample points from a distribution
- Say, a single scalar from a normal distribution:
  - take a random variable from a uniform (or some other) distribution
  - map it through a function (the inverse CDF, if the initial distribution is uniform; see the sketch below)
- But we don't always know the map
- Page 7: Experiments with 1D densities
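
A minimal sketch of the inverse-CDF trick for the normal case (an illustration assuming NumPy/SciPy, not code from the paper):

```python
# Sample N(0, 1) by pushing U[0, 1] through the inverse CDF.
import numpy as np
from scipy.stats import norm

u = np.random.uniform(0.0, 1.0, size=10_000)  # uniform samples
x = norm.ppf(u)                               # inverse CDF (percent-point function)
print(x.mean(), x.std())                      # should be close to 0 and 1
```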
1.2. Learning the map
- So, let's learn the map
- But learning the map directly is also difficult
1.3. Learn the noise to remove
- Diffusion!!
  - Gradually remove noise
  - \(D(x_\alpha, \alpha) \rightarrow \mathbb{E}[\text{total noise}]\)
- Page 4: Algorithm 4 (Sampling)
- See Code (sketch below)
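
A minimal PyTorch sketch of the deterministic sampling loop described by Algorithm 4; the name `model` and the signature `model(x, alpha)` are assumptions, not the paper's code:

```python
import torch

@torch.no_grad()
def sample(model, x0, steps=128):
    """Map a noise sample x0 to a target sample by following D(x_alpha, alpha)."""
    x = x0
    for t in range(steps):
        alpha, alpha_next = t / steps, (t + 1) / steps
        # Euler step along the learned expected direction
        x = x + (alpha_next - alpha) * model(x, alpha)
    return x
```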
2. How to Train
- \(\alpha \sim U[0, 1]\)
- \(x_\alpha\): the blended sample
  - \(x_0 \sim\) noise distribution
  - \(x_1 \sim\) target distribution
  - \(x_\alpha = (1 - \alpha)\, x_0 + \alpha\, x_1\)
- training target: \(D(x_\alpha, \alpha) \rightarrow (x_1 - x_0)\)
- Page 4: Algorithm 3 (Training)
- See Code (sketch below)
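
A sketch of one training step following Algorithm 3; `model`, `optimizer`, and the batch handling are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, x1):
    # x1 ~ target distribution (a batch of real samples)
    x0 = torch.randn_like(x1)                  # x0 ~ noise distribution
    # one alpha ~ U[0, 1] per batch element, broadcast over the remaining dims
    alpha = torch.rand(x1.shape[0], *[1] * (x1.dim() - 1), device=x1.device)
    x_alpha = (1 - alpha) * x0 + alpha * x1    # blended sample
    # L2 loss on (x1 - x0); its minimizer is E[x1 - x0 | x_alpha]
    loss = F.mse_loss(model(x_alpha, alpha), x1 - x0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```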
3. IADB
Deterministic Denoising Diffusion Model
- Deterministic
  - Some good properties: can interpolate (see the sketch below)
- Denoising
- Diffusion
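
One way the interpolation property can be used, assuming `sample` is the loop sketched in 1.3 above; the linear blend of noise seeds is my choice for illustration:

```python
import torch

def interpolate(model, x0_a, x0_b, n=8, steps=128):
    """Deterministic sampling maps each blended noise seed to a data sample,
    so sweeping w gives a smooth path between the two outputs."""
    outs = []
    for w in torch.linspace(0.0, 1.0, n):
        x0 = (1 - w) * x0_a + w * x0_b   # blend the two noise seeds
        outs.append(sample(model, x0, steps))
    return outs
```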
4. Why it works
- alpha blending
- alpha deblending
- Take a blended point \(x_\alpha\) and sample a (noise, target) pair \((x_0, x_1)\) that deblends to it
- This is stochastic (Algorithm 1)
- If the sampled pair is replaced by its expectation, the iteration becomes deterministic (Algorithm 2)
- And in the limit of many small steps, both converge to the same mapping (see below)
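
One way to state the limit argument (my phrasing, consistent with the \(D\) above): as the step count grows, replacing the sampled pair by its expectation turns the iteration into an Euler discretization of

\[
\frac{\mathrm{d} x_\alpha}{\mathrm{d} \alpha} = \mathbb{E}\left[\, x_1 - x_0 \mid x_\alpha \right] = D(x_\alpha, \alpha),
\]

and the stochastic iteration averages to the same flow in the small-step limit.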
4.1. Just learn noise
- And we don't need to learn the expectations (\(\bar{x}_0\) and \(\bar{x}_1\)) separately
- Just learn the noise, i.e., the difference \(x_1 - x_0\) (see the identity below)
- Table 1
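
Why the difference suffices (my rearrangement of the definitions above): since \(x_\alpha = (1-\alpha)\,\bar{x}_0 + \alpha\,\bar{x}_1\) and \(D(x_\alpha, \alpha) = \bar{x}_1 - \bar{x}_0\), both expectations are recoverable from \(x_\alpha\) and \(D\):

\[
\bar{x}_0 = x_\alpha - \alpha\, D(x_\alpha, \alpha), \qquad
\bar{x}_1 = x_\alpha + (1 - \alpha)\, D(x_\alpha, \alpha).
\]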
4.2. L2 Norm Learns Expectation
- Section 4.2 of the paper (derivation sketch below)
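
A one-line version of why L2 training learns the conditional expectation: for any random target \(X\),

\[
\nabla_y\, \mathbb{E}\big[\lVert y - X \rVert^2\big] = 2\big(y - \mathbb{E}[X]\big) = 0
\;\Longrightarrow\; y^\star = \mathbb{E}[X],
\]

so a network trained with L2 against samples of \(x_1 - x_0\) converges to \(\mathbb{E}[x_1 - x_0 \mid x_\alpha]\).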
4.3. Runge-Kutta (RK) Integration
- Algorithm 5 (sketch below)
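
A hedged sketch of a second-order (Heun) step replacing the Euler update; the paper's Algorithm 5 uses Runge-Kutta integration, but this particular scheme and the `model` signature are assumptions:

```python
import torch

@torch.no_grad()
def sample_heun(model, x0, steps=64):
    """Predictor-corrector (RK2/Heun) variant of the sampling loop."""
    x = x0
    for t in range(steps):
        a, a_next = t / steps, (t + 1) / steps
        d = model(x, a)                            # slope at alpha
        x_pred = x + (a_next - a) * d              # Euler predictor
        d_next = model(x_pred, a_next)             # slope at alpha_next
        x = x + (a_next - a) * 0.5 * (d + d_next)  # trapezoidal corrector
    return x
```

Fewer steps are typically needed than with plain Euler, at the cost of two model evaluations per step.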