@arnauddoucet.bsky.social
Senior Staff Research Scientist @Google DeepMind, previously Stats Prof @Oxford Uni - interested in Computational Statistics, Generative Modeling, Monte Carlo methods, Optimal Transport.
Really nice work!
04.12.2025 11:08

Really interesting paper indeed.
20.11.2025 10:15

🔥 WANTED: Student Researcher to join me, @vdebortoli.bsky.social, Jiaxin Shi, Kevin Li and @arthurgretton.bsky.social at DeepMind London.
You'll be working on Multimodal Diffusions for science. Apply here: google.com/about/career...

We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
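For background, a minimal sketch of the standard fixed-dimension conditional flow matching loss that a branching scheme like this generalizes; `v` is a hypothetical velocity network and linear paths are just one common choice, so this illustrates the base technique, not Branching Flows itself:

```python
import torch

# Minimal conditional flow matching loss (fixed dimension), assuming flat
# (batch, dim) data and a hypothetical velocity network v(x_t, t).
# Branching Flows extends this setup to states whose dimension can change.
def cfm_loss(v, x1):
    x0 = torch.randn_like(x1)                    # noise endpoint
    t = torch.rand(x1.shape[0], 1)               # one time per sample
    xt = (1 - t) * x0 + t * x1                   # straight-line path
    return ((v(xt, t) - (x1 - x0)) ** 2).mean()  # regress its velocity
```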
10.11.2025 09:09

Really nice.
05.11.2025 09:44

Very excited to share our preprint: Self-Speculative Masked Diffusions
We speed up sampling of masked diffusion models by ~2x by using speculative sampling and a hybrid non-causal / causal transformer
arxiv.org/abs/2510.03929
w/ @vdebortoli.bsky.social, Jiaxin Shi, @arnauddoucet.bsky.social
(1/n)
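The ~2x speedup rests on speculative verification: a draft pass proposes several tokens at once, and generation rolls back to the first disagreement with the verify pass. A toy sketch, with greedy agreement as a simplified stand-in for the exact accept/reject rule and all names illustrative:

```python
import torch

# Toy verification step: a cheap draft pass fills several masked positions
# at once; a verify pass checks them in parallel, and sampling resumes from
# the first rejected position. Greedy matching here is a simplification of
# the exact speculative accept/reject rule.
def first_rejection(draft_tokens, verify_logits):
    # draft_tokens: (L,) int tensor; verify_logits: (L, vocab)
    agree = verify_logits.argmax(dim=-1) == draft_tokens
    mismatches = (~agree).nonzero()
    return int(mismatches[0]) if len(mismatches) else len(draft_tokens)
```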

🚨 Train a model solving DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📄 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...

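A rough sketch of the training loop this describes, under heavy assumptions: `model` amortizes geometry → orbital coefficients, `energy` stands in for a differentiable variational DFT objective, and the model's own outputs are banked as self-generated training data. All names are placeholders, not the repo's API.

```python
import torch

# Hypothetical amortized variational step: the DFT energy itself is the
# training loss (lower energy = better ground-state guess), so no labelled
# data is needed; detached predictions seed later refinement passes.
def self_refining_step(model, energy, optimizer, geometries, buffer):
    coeffs = model(geometries)                       # predict orbitals
    loss = energy(coeffs, geometries).mean()         # variational objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.extend(zip(geometries, coeffs.detach()))  # self-made data
    return float(loss)
```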
Shunichi Amari has been awarded the 40th (2025) Kyoto Prize in recognition of his pioneering research in the fields of artificial neural networks, machine learning, and information geometry
www.riken.jp/pr/news/2025...

🎉 Applications open - LOGML 2025 🎉
🔥 Mentor-led projects, expert talks, tutorials, socials, and a networking night
✏️ Application form: logml.ai
💬 Projects: www.logml.ai/projects.html
📅 Apply by 6th April 2025
✉️ Questions? logml.committee@gmail.com
#MachineLearning #SummerSchool #LOGML #Geometry

Just write a short informal email. If the person needs a long-winded polite email to answer, then perhaps you don't want to have to interact with them.
09.03.2025 13:42

SuperDiff goes super big!
- Spotlight at #ICLR2025! 🥳
- Stable Diffusion XL pipeline on HuggingFace huggingface.co/superdiff/su... made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt guessing game in the thread 👇

Why academia is sleepwalking into self-destruction. My editorial in @brain1878.bsky.social. If you agree with the sentiments, please repost. It's important for all our sakes to stop the madness.
academic.oup.com/brain/articl...

Excited to see our paper "Computing Nonequilibrium Responses with Score-Shifted Stochastic Differential Equations" in Physical Review Letters this morning as an Editor's Suggestion! We use ideas from generative modeling to unravel a rather technical problem. 🧵 journals.aps.org/prl/abstract...
04.03.2025 18:45

Great intro to PAC-Bayes bounds. Highly recommended!
05.03.2025 09:55

Well, you can do it, but we don't have any proof. We actually also ran alpha-DSBM with zero-variance noise, i.e. really an "alpha-rectified flow": experimentally it does "work", but we have no proof of convergence for the procedure.
08.02.2025 17:54

Yes, the trajectories are not quite smooth as they correspond to a Brownian bridge; as the variance of the reference Brownian motion of your SB goes to zero, you recover the deterministic, straight paths of OT.
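Concretely, a sketch of the bridge marginal in question, with sigma the reference volatility: the law at time t is Gaussian around the straight line between the endpoints, and the noise term vanishes as sigma → 0.

```python
import numpy as np

# Brownian bridge marginal between x0 and x1 at time t in [0, 1], for a
# reference Brownian motion with volatility sigma. At sigma = 0 this is
# exactly the straight (OT-like) interpolation.
def bridge_sample(x0, x1, t, sigma, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(np.shape(x0))
```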
08.02.2025 12:02

Better diffusions with scoring rules!
Fewer, larger denoising steps using distributional losses; learn the posterior distribution of clean samples given the noisy versions.
arxiv.org/pdf/2502.02483
@vdebortoli.bsky.social Galashov Guntupalli Zhou @sirbayes.bsky.social @arnauddoucet.bsky.social
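One way to instantiate such a distributional loss is the energy score, a strictly proper scoring rule; a minimal sketch, assuming a stochastic denoiser `model(x_t, t, z)` that samples its learned posterior over clean data given noisy x_t (names illustrative, not the paper's exact construction):

```python
import torch

# Energy score for a stochastic denoiser: pull posterior samples toward the
# true clean x0, while repelling samples from each other so the predicted
# posterior keeps its spread. Assumes flat (batch, dim) data.
def energy_score_loss(model, x0, x_t, t, n=4):
    samples = torch.stack(
        [model(x_t, t, torch.randn_like(x_t)) for _ in range(n)]
    )  # (n, batch, dim)
    attract = (samples - x0.unsqueeze(0)).norm(dim=-1).mean()
    i, j = torch.triu_indices(n, n, offset=1)   # all sample pairs
    repel = (samples[i] - samples[j]).norm(dim=-1).mean()
    return attract - 0.5 * repel
```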
A standard ML approach for parameter estimation in latent variable models is to maximize the expectation of the logarithm of an importance sampling estimate of the intractable likelihood. We provide consistency/efficiency results for the resulting estimate: arxiv.org/abs/2501.08477
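In code form, the objective being analyzed (a sketch: `log_joint` and `log_proposal` are assumed callables returning log p(x, z_k) and log q(z_k | x) per particle):

```python
import math
import torch

# Importance-weighted log-likelihood estimate: log((1/K) sum_k w_k) with
# w_k = p(x, z_k) / q(z_k | x), averaged over the batch, computed stably
# in log space. Maximizing this over parameters gives the estimator whose
# consistency/efficiency the paper studies.
def iw_log_likelihood(log_joint, log_proposal, z):  # z: (K, batch, dim)
    log_w = log_joint(z) - log_proposal(z)          # (K, batch)
    return (torch.logsumexp(log_w, dim=0) - math.log(z.shape[0])).mean()
```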
16.01.2025 17:43

Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With @vdebortoli.bsky.social, A. Galashov & @arthurgretton.bsky.social, we extend this approach to (continuous-space) diffusion models: arxiv.org/abs/2501.05370
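The accept/reject rule at the heart of speculative sampling carries over from token distributions to densities; a sketch of one step (the `log_p`/`log_q` callables and the residual-resampling step on rejection are assumptions, not the paper's exact algorithm):

```python
import torch

# Accept a drafted point x with probability min(1, p(x) / q(x)), where q is
# the cheap draft model's density and p the target model's. On rejection,
# the full method resamples from a corrected residual distribution so the
# overall samples still follow p.
def accept_draft(log_p, log_q, x):
    log_ratio = log_p(x) - log_q(x)
    return bool(torch.rand(()) < log_ratio.clamp(max=0.0).exp())
```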
10.01.2025 16:30

I personally read at least a couple of hours per day. It is not particularly focused and I might "waste" time, but I just enjoy it.
08.01.2025 08:05

Very nice paper indeed. I like it.
27.12.2024 16:38

🎉 Super excited to announce the first ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!
🌐 website: sites.google.com/view/fpiwork...
📥 Call for papers: sites.google.com/view/fpiwork...
more details in the thread below 👇🧵

Schrödinger Bridge Flow for Unpaired Data Translation (by @vdebortoli.bsky.social et al.)
It will take me some time to digest this article fully, but it's important to follow the authors' advice and read the appendices, as the examples are helpful and well-illustrated.
📄 arxiv.org/abs/2409.09347

The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data-assimilation, and diffusion/flow models. Applications are open until the end of January. Details:
alexxthiery.github.io/jobs/2024_di...
I couldn't speak for the following 3 days :-)
14.12.2024 22:13

I have updated my course notes on Optimal Transport with a new Chapter 9 on Wasserstein flows. It includes 3 illustrative applications: training a 2-layer MLP, deep transformers, and flow-matching generative models. You can access it here: mathematical-tours.github.io/book-sources...
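For a taste of what a Wasserstein flow computes (a sketch for intuition, not code from the notes): for a potential energy E(mu) = ∫ V dmu represented by particles, the flow simply moves each particle along -grad V.

```python
import numpy as np

# Explicit Euler discretization of the Wasserstein gradient flow of the
# energy E(mu) = integral of V d(mu): each particle follows -grad V.
def wasserstein_flow_step(particles, grad_V, step=0.01):
    return particles - step * grad_V(particles)

# Example: V(x) = |x|^2 / 2 pulls the particle cloud toward the origin.
pts = np.random.default_rng(0).normal(size=(100, 2))
for _ in range(50):
    pts = wasserstein_flow_step(pts, lambda x: x)
```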
04.12.2024 08:11

Exciting new work by my truly brilliant postdoc Eugenio Clerico on the optimality of coin-betting strategies for mean estimation!
for fans of: mean estimation, online learning with log loss, optimal portfolios, hypothesis testing with E-values, etc.
dig in:
arxiv.org/abs/2412.02640
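For intuition, a toy version of the betting idea, assuming data in [0, 1] and a fixed bet fraction (the paper's question is which betting strategies are optimal, e.g. adaptive bets):

```python
# Toy coin-betting test for "the mean of [0, 1]-valued data equals m":
# wealth is a nonnegative martingale when m is the true mean, so a large
# final wealth is evidence (an e-value) against m. A fixed bet fraction
# lam is a simplification; optimality concerns how lam should be chosen.
def betting_wealth(xs, m, lam=0.5):
    wealth = 1.0
    for x in xs:
        wealth *= 1.0 + lam * (x - m)  # stays > 0 for x in [0, 1], lam = 0.5
    return wealth
```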