👀 arxiv.org/abs/2510.25897
Thread with all details coming soon!
@lucasdegeorge.bsky.social
PhD student at École Polytechnique (Vista) and École des Ponts (IMAGINE) Working on conditional diffusion models
Very proud of our recent work, kudos to the team! Read @davidpicard.bsky.social’s excellent post for more details or the paper arxiv.org/pdf/2502.21318
08.10.2025 21:19
Final note: I'm (we're) tempted to organize a challenge on this topic as a workshop at a CV conference. ImageNet is the only source of images allowed, and then you compete to get the bold numbers.
Do you think there would be people in for that? Do you think it would make for a nice competition?
🚨Updated: "How far can we go with ImageNet for Text-to-Image generation?"
TL;DR: train a text2image model from scratch on ImageNet only and beat SDXL.
Paper, code, data available! Reproducible science FTW!
🧵👇
📜 arxiv.org/abs/2502.21318
💻 github.com/lucasdegeorg...
💽 huggingface.co/arijitghosh/...
I had the privilege of being invited to speak about our work "Around the World in 80 Timesteps" on the French podcast Underscore! If you speak French, I highly recommend it; they did a great job with the editing!
If you want to learn more nicolas-dufour.github.io/plonk
www.youtube.com/watch?v=s5oH...
If you want to listen to Nicolas (in French) talking about generative models for geolocation, it's right now: m.twitch.tv/micode
25.06.2025 18:25
With @arrijitghosh.bsky.social @nicolasdufour.bsky.social @davidpicard.bsky.social and @vickykalogeiton.bsky.social
05.03.2025 11:52
🛠️ Try it yourself:
- Access the models on Hugging Face: huggingface.co/Lucasdegeorge/CAD-I
- Train your own text-to-image models using our setup: github.com/lucasdegeorge/T2I-ImageNet
- Check out the project page: lucasdegeorge.github.io/projects/t2i...
🔍 Key findings: 
- +2 points overall over SDXL on GenEval
- +5 points on DPGBench 🏆
- Only 1/10th of the model parameters
- Trained on 1/1000th of the usual number of training images
We used ImageNet with two smart data augmentations:
- Detailed recaptioning: turning limited captions into rich, context-aware captions that capture styles, backgrounds, and actions.
- Composition: using CutMix to create diverse concept combinations, expanding the dataset's learning potential.
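To make the composition step concrete, here is a minimal sketch of CutMix-style augmentation: paste a random crop of one image into another and merge the captions. The caption template is our own illustration, not the paper's exact recaptioning pipeline.

```python
import numpy as np

def cutmix(img_a, img_b, cap_a, cap_b, lam=None, rng=None):
    """Paste a random crop of img_b into img_a and combine the captions.
    Images are HxWxC numpy arrays of identical shape; lam is the fraction
    of img_a that is kept (area of the cut patch is 1 - lam)."""
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = lam if lam is not None else rng.beta(1.0, 1.0)
    # Side lengths so the patch covers a (1 - lam) fraction of the area
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y0, y1 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x0, x1 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    mixed = img_a.copy()
    mixed[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    # Naive merged caption; the paper's recaptioning is richer than this
    caption = f"{cap_a}, with a patch showing {cap_b}"
    return mixed, caption
```

Each mixed sample pairs two unrelated ImageNet concepts in one image, which is exactly the kind of composition a class-conditional dataset never provides on its own.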
Text-to-image models are trained with the "bigger is better" paradigm. 
But do we really need billions of images? 
No, if we are careful enough!
We trained text-to-image models using 1000× less data in just 200 GPU hours, achieving good-quality images and strong performance on benchmarks.
🚨 News! 🚨
We have released the models from our latest paper "How far can we go with ImageNet for text-to-image generation?" 
Check out the models on HuggingFace:
🤗 huggingface.co/Lucasdegeorg...
📜 arxiv.org/abs/2502.21318
Text-to-image models are trained on billions of image-text pairs.
But is that really necessary?
Our paper "How far can we go with ImageNet for T2I generation?" with @lucasdegeorge.bsky.social @arrijitghosh.bsky.social @nicolasdufour.bsky.social @davidpicard.bsky.social shows it is not, if we are careful enough: arxiv.org/abs/2502.21318
Wow, neat! Reannotation is key here.
Conjecture:
As we get more and more well-aligned text-image data, it will become easier and easier to train models.
This will allow us to explore both more streamlined and more exotic training recipes.
More signals that exciting times are coming!
These are some ridiculously good results from training tiny T2I models purely on ImageNet! It's almost too good to be true. Do check it out!
03.03.2025 10:46
🚨 New preprint!
How far can we go with ImageNet for Text-to-Image generation? w. @arrijitghosh.bsky.social @lucasdegeorge.bsky.social @nicolasdufour.bsky.social @vickykalogeiton.bsky.social 
TL;DR: Train a text-to-image model using 1000× less data in 200 GPU hours!
📜 https://arxiv.org/abs/2502.21318
🧵👇
🌍 Guessing where an image was taken is a hard and often ambiguous problem. Introducing diffusion-based geolocation: we predict global locations by refining random guesses into trajectories across the Earth's surface!
🗺️ Paper, code, and demo: nicolas-dufour.github.io/plonk
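As a conceptual sketch (not the paper's actual model), the "refining random guesses into trajectories" idea can be illustrated like this: start from a uniform random point on the globe and repeatedly step toward the location a predictor currently suggests. The `predict` callable is a hypothetical stand-in for the learned denoiser.

```python
import math
import random

def refine_guess(predict, steps=80, rng=None):
    """Toy sketch: draw a uniform random (lat, lon) guess, then move a
    growing fraction of the way toward predict(lat, lon, t) at each step,
    tracing a trajectory of (lat, lon) pairs across the sphere."""
    rng = rng or random.Random()
    # Uniform point on the sphere: lat via arcsin of a uniform variate
    lat = math.degrees(math.asin(rng.uniform(-1, 1)))
    lon = rng.uniform(-180.0, 180.0)
    trajectory = [(lat, lon)]
    for t in range(steps):
        tgt_lat, tgt_lon = predict(lat, lon, t)
        alpha = (t + 1) / steps  # small steps early, decisive steps late
        lat += alpha * (tgt_lat - lat)
        # Wrap the longitude difference into [-180, 180) before stepping
        dlon = ((tgt_lon - lon + 180.0) % 360.0) - 180.0
        lon = ((lon + alpha * dlon + 180.0) % 360.0) - 180.0
        trajectory.append((lat, lon))
    return trajectory
```

With a fixed predictor the trajectory converges to that point; with a learned, timestep-conditioned denoiser, early steps express uncertainty over plausible regions and later steps commit to one.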