CVPR@Paris opening speech at Sorbonne University by @davidpicard.bsky.social, @vickykalogeiton.bsky.social, and Matthieu Cord.
Great location!
❤️
(also: free food, just like at the 'real' CVPR)
For more details, visit the project website: yuanzhi-zhu.github.io/DiMO/
Or read the paper: arxiv.org/abs/2503.15457
The project is led by Yuanzhi Zhu (yuanzhi-zhu.github.io/about/) and supervised by @stephlat.bsky.social and @vickykalogeiton.bsky.social.
We test Di[M]O on image generation with MaskGIT & Meissonic as teacher models.
- The first one-step MDM that competes with multi-step teachers.
- A significant speed-up of 8 to 32 times without degradation in quality.
- The first successful distillation approach for text-to-image MDMs.
Our approach fundamentally differs from previous distillation methods, such as DMD. Instead of minimizing the divergence of denoising distributions across the entire latent space, Di[M]O optimizes the divergence of token-level conditional distributions.
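As a minimal sketch of what such a token-level divergence could look like in PyTorch (the exact divergence and weighting used in Di[M]O may differ; the names, shapes, and the choice of KL here are assumptions for illustration only):

```python
import torch
import torch.nn.functional as F

def token_level_divergence(student_logits, teacher_logits, mask):
    """Per-token KL(teacher || student), averaged over masked positions.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    mask: (batch, seq_len) bool, True where the input token is [MASK].
    """
    log_p_s = F.log_softmax(student_logits, dim=-1)
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    # KL divergence per token position, summed over the vocabulary
    kl = (log_p_t.exp() * (log_p_t - log_p_s)).sum(dim=-1)   # (batch, seq_len)
    # Only masked positions contribute; average over them
    return (kl * mask).sum() / mask.sum().clamp(min=1)
```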
To approximate the loss gradient, we introduce an auxiliary model that estimates an otherwise intractable term in the loss function. The auxiliary model is trained using a standard MDM training loss, with one-step generated samples as targets.
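To make that concrete, here is a hedged sketch of training an auxiliary network with a standard masked-token-prediction loss on the student's one-step samples; `aux_model`, the random masking schedule, and the optimizer handling are assumptions, not the authors' exact recipe:

```python
import torch
import torch.nn.functional as F

def auxiliary_model_step(aux_model, one_step_samples, mask_token_id, optimizer):
    """Train the auxiliary network with a standard MDM (masked-token-prediction)
    loss, using the student's one-step samples as targets."""
    tokens = one_step_samples.detach()                      # (batch, seq_len), fixed targets
    # Standard MDM corruption: mask a random fraction of tokens per sample
    ratio = torch.rand(tokens.size(0), 1, device=tokens.device)
    mask = torch.rand(tokens.shape, device=tokens.device) < ratio
    corrupted = torch.where(mask, torch.full_like(tokens, mask_token_id), tokens)

    logits = aux_model(corrupted)                           # (batch, seq_len, vocab_size)
    loss = F.cross_entropy(logits[mask], tokens[mask])      # predict the masked-out tokens
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```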
To sample from the correct joint distribution, we introduce an initialization that maps a randomized input sequence to an almost deterministic target sequence.
Without proper initialization, the model may suffer from divergence or mode collapse, making this step essential.
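One possible reading of that initialization, as a hedged sketch (the single fixed `target_tokens`, the use of i.i.d. random token ids as the "randomized input", and all names and hyperparameters are assumptions for illustration, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def init_student(student, target_tokens, vocab_size, optimizer, steps=500, batch_size=32):
    """Warm-up the one-step student so that randomized input sequences are
    mapped to an (almost) deterministic target sequence before distillation.
    `target_tokens` is a single fixed sequence, e.g. one teacher sample."""
    seq_len = target_tokens.numel()
    device = target_tokens.device
    for _ in range(steps):
        # Randomized input: i.i.d. random token ids (one possible choice)
        x = torch.randint(0, vocab_size, (batch_size, seq_len), device=device)
        logits = student(x)                                  # (batch, seq_len, vocab_size)
        target = target_tokens.unsqueeze(0).expand(batch_size, -1)
        loss = F.cross_entropy(logits.reshape(-1, vocab_size), target.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```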
The initial distribution is crucial here. As pointed out by Jiaming Song in his recent position paper (arxiv.org/abs/2503.07154), multi-token prediction is inherently difficult due to the independence assumption between the predicted tokens.
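As a toy illustration of that independence assumption (this is the standard way the issue is usually written out, not an equation taken from either paper): when several masked tokens are revealed in the same step, the per-step joint factorizes as

```latex
p_\theta(x_i, x_j \mid x_t) \;=\; p_\theta(x_i \mid x_t)\, p_\theta(x_j \mid x_t),
```

so correlations between simultaneously predicted tokens are ignored, which is exactly what makes aggressive multi-token (and, in the limit, one-step) prediction hard.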
The key idea is inspired by on-policy distillation. We align the output distributions of the teacher and student models at student-generated intermediate states, ensuring that the student's generation closely matches the teacher's by covering all possible intermediate states.
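A hedged sketch of such an on-policy distillation step; `generate_one_step`, the re-masking rule used to form intermediate states, and the reuse of `token_level_divergence` from the earlier sketch are all assumptions, and the gradient approximation via the auxiliary model described above is omitted here:

```python
import torch

def on_policy_distillation_step(student, teacher, mask_token_id, optimizer):
    """Align teacher and student token distributions at intermediate states
    produced from the student's own one-step generations (on-policy)."""
    # 1) One-step generation by the student itself
    x0 = student.generate_one_step()                        # (batch, seq_len) token ids

    # 2) Build an on-policy intermediate state by re-masking part of x0
    ratio = torch.rand(x0.size(0), 1, device=x0.device)
    remask = torch.rand(x0.shape, device=x0.device) < ratio
    xt = torch.where(remask, torch.full_like(x0, mask_token_id), x0)

    # 3) Match token-level conditionals at the student-generated state
    loss = token_level_divergence(student(xt), teacher(xt).detach(), remask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```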
Masked Diffusion Models (MDMs) are a hot topic in generative AI 🔥: powerful but slow due to their multiple sampling steps.
We (@polytechniqueparis.bsky.social and @inria-grenoble.bsky.social) introduce Di[M]O, a novel approach to distill MDMs into a one-step generator without sacrificing quality.
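To show why the sampling steps are the bottleneck, here is a hedged sketch contrasting standard iterative MDM sampling with the single forward pass a one-step generator needs; the linear unmasking schedule, the confidence-based selection, and all names are assumptions for illustration, not the exact samplers used by MaskGIT, Meissonic, or Di[M]O:

```python
import torch

@torch.no_grad()
def multi_step_sample(model, seq_len, mask_token_id, steps=16):
    """Iterative MDM sampling (MaskGIT-style): start fully masked and reveal a
    growing fraction of tokens over `steps` forward passes."""
    x = torch.full((1, seq_len), mask_token_id, dtype=torch.long)
    for step in range(steps):
        logits = model(x)                                   # (1, seq_len, vocab_size)
        conf, pred = torch.softmax(logits, dim=-1).max(dim=-1)
        still_masked = x.eq(mask_token_id)
        # Budget of tokens that should be revealed by the end of this step
        k = int(seq_len * (step + 1) / steps) - int((~still_masked).sum())
        if k > 0:
            conf = conf.masked_fill(~still_masked, -1.0)    # never re-pick revealed positions
            idx = conf.topk(k, dim=-1).indices[0]
            x[0, idx] = pred[0, idx]
    return x

@torch.no_grad()
def one_step_sample(student, seq_len, mask_token_id):
    """What a distilled one-step generator amounts to: a single forward pass."""
    x = torch.full((1, seq_len), mask_token_id, dtype=torch.long)
    return student(x).argmax(dim=-1)
```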