🚨🧪 Announcing our #ICLR2026 Workshop, Generative AI in Genomics (Gen2): Barriers and Frontiers! @iclr-conf.bsky.social
📣Call for: Full workshop papers (5-8 pages) and Tiny papers (2-4 pages)
📅Submission deadline: 7 February 2026 AoE
🌐Learn more: genai-in-genomics.github.io
(1/7)
Really nice work!
Really interesting paper indeed.
🔥 WANTED: Student Researcher to join me, @vdebortoli.bsky.social, Jiaxin Shi, Kevin Li and @arthurgretton.bsky.social at DeepMind London.
You'll be working on Multimodal Diffusions for science. Apply here: google.com/about/career...
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
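For readers new to the base technique: a minimal sketch of the standard fixed-dimension conditional flow matching loss that Branching Flows generalizes. The toy v_net and the linear path are assumptions for illustration only, not the Branching Flows objective (the dimension-changing part is in the paper).

```python
import torch
import torch.nn as nn

# Toy velocity network (an assumption for illustration): maps (x_t, t) -> R^2.
v_net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))

def cfm_loss(x0, x1):
    t = torch.rand(x0.shape[0], 1)              # one time per pair
    xt = (1 - t) * x0 + t * x1                  # point on the straight interpolation path
    target = x1 - x0                            # conditional velocity along that path
    v = v_net(torch.cat([xt, t], dim=1))
    return ((v - target) ** 2).mean()

x0, x1 = torch.randn(128, 2), torch.randn(128, 2) + 3.0
print(cfm_loss(x0, x1))                         # regress the network onto the velocity
```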
Really nice.
Very excited to share our preprint: Self-Speculative Masked Diffusions
We speed up sampling of masked diffusion models by ~2x using speculative sampling and a hybrid non-causal / causal transformer.
arxiv.org/abs/2510.03929
w/ @vdebortoli.bsky.social, Jiaxin Shi, @arnauddoucet.bsky.social
(1/n) 🚨 Train a model that solves DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
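A heavily simplified sketch of the self-refining idea, with a toy quadratic "energy" standing in for the DFT variational functional and a scalar "geometry" (all names here are hypothetical illustrations, not the paper's code): minimize the variational energy across geometries, and feed the model's own solutions back in as training data.

```python
import torch
import torch.nn as nn

# Toy stand-ins: "geometry" is a scalar g; the "variational energy"
# E(w; g) = (w - sin g)^2 is minimized by the ground state w* = sin g.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
buffer = []                                      # self-generated (geometry, solution) pairs

for step in range(300):
    g = torch.rand(64, 1) * 6.28                 # sample geometries
    loss = ((model(g) - torch.sin(g)) ** 2).mean()   # variational objective
    if buffer:                                   # self-refinement: also fit past solutions
        g_old, w_old = buffer[-1]
        loss = loss + 0.1 * ((model(g_old) - w_old) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    buffer.append((g.detach(), model(g).detach()))
```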
Shunichi Amari has been awarded the 40th (2025) Kyoto Prize in recognition of his pioneering research in the fields of artificial neural networks, machine learning, and information geometry
www.riken.jp/pr/news/2025...
🌟Applications open: LOGML 2025🌟
👥Mentor-led projects, expert talks, tutorials, socials, and a networking night
✍️Application form: logml.ai
🔬Projects: www.logml.ai/projects.html
📅Apply by 6th April 2025
✉️Questions? logml.committee@gmail.com
#MachineLearning #SummerSchool #LOGML #Geometry
Just write a short informal email. If the person needs a long-winded polite email to answer, then perhaps you don't want to have to interact with them.
SuperDiff goes super big!
- Spotlight at #ICLR2025!🥳
- Stable Diffusion XL pipeline on HuggingFace (huggingface.co/superdiff/su...), made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt guessing game in the thread👇
Why academia is sleepwalking into self-destruction. My editorial in @brain1878.bsky.social. If you agree with the sentiments, please repost. It's important for all our sakes to stop the madness.
academic.oup.com/brain/articl...
Excited to see our paper “Computing Nonequilibrium Responses with Score-Shifted Stochastic Differential Equations” in Physical Review Letters this morning as an Editor’s Suggestion! We use ideas from generative modeling to unravel a rather technical problem. 🧵 journals.aps.org/prl/abstract...
Great intro to PAC-Bayes bounds. Highly recommended!
Well, you can do it, but we don't have any proof. We actually also ran alpha-DSBM with zero-variance noise, i.e. really an "alpha-rectified flow": experimentally it does "work", but we have no proof of convergence for the procedure.
Yes, the trajectories are not quite smooth as they correspond to a Brownian bridge; as the variance of the reference Brownian motion of your SB goes to zero, you recover the deterministic, straight paths of OT.
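A toy numerical illustration of that limit (not from the thread): sample a Brownian bridge between two endpoints and shrink the reference noise scale; at sigma = 0 the path is exactly the straight line.

```python
import numpy as np

rng = np.random.default_rng(0)

def bridge_path(x0, x1, sigma, n=200):
    """Brownian bridge from x0 to x1 with reference noise scale sigma."""
    t = np.linspace(0.0, 1.0, n)
    dW = rng.standard_normal(n - 1) * np.sqrt(1.0 / (n - 1))
    W = np.concatenate([[0.0], np.cumsum(dW)])   # Brownian motion path
    return (1 - t) * x0 + t * x1 + sigma * (W - t * W[-1])

line = np.linspace(0.0, 1.0, 200)
print(np.ptp(bridge_path(0.0, 1.0, sigma=1.0) - line))  # rough, non-smooth path
print(np.ptp(bridge_path(0.0, 1.0, sigma=0.0) - line))  # 0.0: the straight OT path
```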
Better diffusions with scoring rules!
Fewer, larger denoising steps via distributional losses: the model learns the posterior distribution of clean samples given the noisy versions.
arxiv.org/pdf/2502.02483
@vdebortoli.bsky.social, Galashov, Guntupalli, Zhou, @sirbayes.bsky.social, @arnauddoucet.bsky.social
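One standard distributional loss of this kind is the energy score, a strictly proper scoring rule; a minimal sketch below illustrates the general idea, not necessarily the paper's exact loss. Here samples would be model draws given one noisy input.

```python
import torch

def energy_score_loss(samples, x_clean, beta=1.0):
    """Negative energy score (strictly proper for beta in (0, 2)).

    samples: (m, d) model draws given one noisy observation; x_clean: (d,).
    In expectation, minimized when the law of the samples equals the true
    posterior of the clean sample given the noisy one.
    """
    m = samples.shape[0]
    attract = (samples - x_clean).norm(dim=1).pow(beta).mean()
    repel = torch.cdist(samples, samples).pow(beta).sum() / (m * (m - 1))
    return attract - 0.5 * repel

# Sanity check: samples centered on the target score better than offset ones.
x = torch.zeros(4)
print(energy_score_loss(torch.randn(64, 4), x))         # lower loss
print(energy_score_loss(torch.randn(64, 4) + 2.0, x))   # higher loss
```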
A standard ML approach for parameter estimation in latent variable models is to maximize the expectation of the logarithm of an importance sampling estimate of the intractable likelihood. We provide consistency/efficiency results for the resulting estimate: arxiv.org/abs/2501.08477
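In code, that estimator in the simplest conjugate toy case (a hypothetical 1-d Gaussian model where the marginal likelihood has a closed form, so the estimate can be checked):

```python
import math
import torch
from torch.distributions import Normal

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1), so the marginal is p(x) = N(0, 2).
def log_is_estimate(x, K=64):
    q = Normal(0.5 * x, 0.5 ** 0.5)              # proposal (here: the exact posterior)
    z = q.sample((K,))
    log_w = (Normal(0.0, 1.0).log_prob(z)        # prior part of log p(x, z)
             + Normal(z, 1.0).log_prob(x)        # likelihood part
             - q.log_prob(z))                    # minus proposal: log importance weights
    return torch.logsumexp(log_w, 0) - math.log(K)   # log of (1/K) sum_k w_k

x = torch.tensor(1.3)
print(log_is_estimate(x), Normal(0.0, 2.0 ** 0.5).log_prob(x))  # agree
```

With the exact posterior as proposal the weights are constant, so the estimate is exact; in practice q is learned and one maximizes the expectation of this log-estimate.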
Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With @vdebortoli.bsky.social, A. Galashov & @arthurgretton.bsky.social, we extend this approach to (continuous-space) diffusion models: arxiv.org/abs/2501.05370
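For reference, the classic token-level draft-and-verify step the post builds on; this sketch is only the LLM mechanism, while the continuous-space diffusion extension is what the paper contributes.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_token(p, q):
    """One classic speculative sampling step.

    p, q: target and draft next-token distributions (arrays summing to 1).
    The returned token is exactly distributed according to p.
    """
    x = rng.choice(len(q), p=q)                  # cheap draft model proposes a token
    if rng.random() < min(1.0, p[x] / q[x]):     # target model verifies the proposal
        return x
    resid = np.maximum(p - q, 0.0)               # on rejection, resample from the
    return rng.choice(len(p), p=resid / resid.sum())  # residual distribution
```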
I personally read at least a couple of hours per day. It is not particularly focused and I might "waste" time but I just enjoy it.
Very nice paper indeed. I like it.
🔊 Super excited to announce the first ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!
🔗 website: sites.google.com/view/fpiwork...
🔥 Call for papers: sites.google.com/view/fpiwork...
more details in thread below👇 🧵
Schrödinger Bridge Flow for Unpaired Data Translation (by @vdebortoli.bsky.social et al.)
It will take me some time to digest this article fully, but it's important to follow the authors' advice and read the appendices, as the examples are helpful and well-illustrated.
📄 arxiv.org/abs/2409.09347
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data-assimilation, and diffusion/flow models. Applications are open until the end of January. Details:
alexxthiery.github.io/jobs/2024_di...
I couldn't speak for the following 3 days :-)
I have updated my course notes on Optimal Transport with a new Chapter 9 on Wasserstein flows. It includes 3 illustrative applications: training a 2-layer MLP, deep transformers, and flow-matching generative models. You can access it here: mathematical-tours.github.io/book-sources...
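Not from the notes, but a minimal sketch of the object Chapter 9 studies: particles following the Wasserstein gradient flow of a confinement-plus-interaction energy (a toy log-gas; the choices of V and W are assumptions for illustration).

```python
import numpy as np

# E(mu) = ∫ V dmu + (1/2) ∬ W(x - y) dmu(x) dmu(y), with confining
# V(x) = x^2 / 2 and repulsive W(r) = -log|r|. Particles move along
# minus the Wasserstein gradient: dx/dt = -(V'(x) + (W' * mu)(x)).
x = np.linspace(-1.0, 1.0, 300)                  # initial particle positions
for _ in range(2000):
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)               # drop self-interaction (1/inf = 0)
    drift = x - np.mean(1.0 / diff, axis=1)      # V'(x) + mean of W'(x - x_j)
    x -= 0.005 * drift                           # explicit Euler step
print(x.min(), x.max())                          # spreads to a compactly supported profile
```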
Exciting new work by my truly brilliant postdoc Eugenio Clerico on the optimality of coin-betting strategies for mean estimation!
for fans of: mean estimation, online learning with log loss, optimal portfolios, hypothesis testing with E-values, etc.
dig in:
arxiv.org/abs/2412.02640
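A toy taste of the betting picture (a fixed, deliberately suboptimal bet; which strategies are optimal is the paper's subject): wealth processes indexed by a candidate mean m stay bounded only near the truth, and Markov's inequality turns the surviving m's into a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(4, 2, size=2000)                    # i.i.d. on [0, 1], true mean 2/3
grid = np.linspace(0.0, 1.0, 201)                # candidate means m
w_up = np.ones_like(grid)                        # wealth betting "mean > m"
w_dn = np.ones_like(grid)                        # wealth betting "mean < m"
for xt in x:
    w_up *= 1 + 0.5 * (xt - grid)                # factors lie in [0.5, 1.5]: wealth stays > 0
    w_dn *= 1 - 0.5 * (xt - grid)
ev = 0.5 * (w_up + w_dn)                         # a nonnegative martingale with mean 1 under m
ci = grid[ev < 20]                               # keep m where wealth stayed below 1/alpha
print(ci.min(), ci.max())                        # ~95% confidence interval for the mean
```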