@lucamb.bsky.social
Assistant professor in Machine Learning and Theoretical Neuroscience. Generative modeling and memory. Opinionated, often wrong.
Many would, when the number of steps in the puzzle is in the thousands and any error leads to a wrong solution
12.06.2025 15:46

Have you ever asked your child to solve a simple puzzle in 60,000 easy steps?
11.06.2025 18:19

Students using AI to write their reports is like me going to the gym and getting a robot to lift my weights
11.06.2025 17:09

Generative decisions in diffusion models can be detected locally as symmetry breaking in the energy and globally as peaks in the conditional entropy rate.
Both correspond to a (local or global) suppression of the quadratic potential (the trace of the Hessian).
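A minimal sketch of the local criterion, assuming a trained score network `score_fn` (a hypothetical stand-in, not code from the thread): since the energy is E_t(x) = -log p_t(x) and the score is s = -grad E_t, the Hessian trace of the energy is minus the trace of the score's Jacobian, which Hutchinson's estimator makes cheap. A dip in this trace flags the local suppression of the quadratic potential.

```python
import torch

def energy_hessian_trace(score_fn, x, t, n_probes=16):
    """Estimate tr(Hess E_t) = -tr(d score / dx) at x with Hutchinson probes."""
    x = x.detach().requires_grad_(True)
    acc = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(x)                  # random probe vector
        s = score_fn(x, t)                       # s(x, t) = -grad_x E_t(x)
        (vjp,) = torch.autograd.grad(s, x, grad_outputs=v)
        acc = acc + (v * vjp).sum()              # v^T (d s / d x)^T v, same trace
    return -(acc / n_probes)                     # minus: Hess E = -(Jacobian of s)
```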
How do we rebuild our memories? In our new study, we show that hippocampal ripples kickstart a coordinated expansion of cortical activity that helps reconstruct past experiences.
We recorded iEEG from patients during memory retrieval... and found something really cool. (thread)
Why? You can just mute out politics and the owner's antics and it becomes perfectly fine again
03.05.2025 09:24

In continuous generative diffusion, the conditional entropy rate is the constant term that separates the score matching and the denoising score matching losses.
This can be directly interpreted as the information transfer (bit rate) between the state x_t and the final generation x_0.
Decisions during generative diffusion are analogous to phase transitions in physics. They can be identified as peaks in the conditional entropy rate curve!
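A minimal sketch of how such a curve could be estimated, assuming a variance-exploding process x_t = x_0 + sigma(t) * eps and a trained `denoiser` predicting x_0 (both hypothetical stand-ins, not the paper's code): by the I-MMSE relation dI/dsnr = mmse/2, the denoising error traces out the conditional entropy rate dH(x_0 | x_t)/dt, and peaks of the returned curve would mark the "decision" times.

```python
import numpy as np
import torch

def entropy_rate_curve(denoiser, x0, sigmas, ts):
    """Proxy for dH(x_0 | x_t)/dt on a time grid; peaks mark 'decisions'."""
    snr = 1.0 / np.asarray(sigmas) ** 2          # SNR of the Gaussian channel
    dsnr_dt = np.gradient(snr, ts)               # numerical time derivative
    mmse = []
    with torch.no_grad():
        for s in sigmas:
            x_t = x0 + s * torch.randn_like(x0)  # noise a data batch to level s
            err = ((denoiser(x_t, s) - x0) ** 2).sum(dim=-1).mean().item()
            mmse.append(err)                     # Monte Carlo MMSE estimate
    return -dsnr_dt * np.array(mmse) / 2.0       # I-MMSE: dI/dsnr = mmse / 2
```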
30.04.2025 13:37

I'd put these on the NeuroAI vision board:
@tyrellturing.bsky.social's Deep learning framework
www.nature.com/articles/s41...
@tonyzador.bsky.social's Next-gen AI through neuroAI
www.nature.com/articles/s41...
@adriendoerig.bsky.social's Neuroconnectionist framework
www.nature.com/articles/s41...
Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!
#ML #SDE #Diffusion #GenAI
Indeed. We are currently doing a lot of work on guidance, so we will likely try entropic time there soon as well
29.04.2025 15:03

The largest we have tried so far is EDM2-XL on ImageNet 512. It works very well there!
We have not tried it with guidance so far
I am very happy to share our latest work on the information theory of generative diffusion:
"Entropic Time Schedulers for Generative Diffusion Models"
We find that the conditional entropy offers a natural data-dependent notion of time during generation.
Link: arxiv.org/abs/2504.13612
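A minimal sketch of the scheduling idea, assuming an entropy-rate estimate like the one sketched earlier (the paper's actual scheduler may differ): invert the normalized cumulative entropy H(t) so that uniform steps in "entropic time" become data-dependent steps in model time.

```python
import numpy as np

def entropic_time_schedule(t_grid, entropy_rate, n_steps):
    """Choose sampling times so each step covers an equal slice of entropy."""
    dH = np.abs(entropy_rate[:-1]) * np.diff(t_grid)  # entropy per grid interval
    H = np.concatenate([[0.0], np.cumsum(dH)])
    H /= H[-1]                                  # normalized cumulative entropy H(t)
    u = np.linspace(0.0, 1.0, n_steps)          # uniform grid in entropic time
    return np.interp(u, H, t_grid)              # invert H(t) to get t(u)
```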
Flow Matching in a nutshell.
27.11.2024 14:07

I will be at #NeurIPS2024 in Vancouver. I'm looking for post-docs, and if you want to talk about post-doc opportunities, get in touch.
Here's my current team at Aalto University: users.aalto.fi/~asolin/group/
Can language models transcend the limitations of training data?
We train LMs on a formal grammar, then prompt them OUTSIDE of this grammar. We find that LMs often extrapolate logical rules and apply them OOD, too. Proof of a useful inductive bias.
Check it out at NeurIPS:
nips.cc/virtual/2024...
Photograph of Johannes Margraf and Günter Klambauer introducing the ELLIS ML4Molecules Workshop 2024 in Berlin at the Fritz-Haber Institute in Dahlem.
Excited to speak at the ELLIS ML4Molecules Workshop 2024 in Berlin!
moleculediscovery.github.io/workshop2024/
Can we please stop sharing posts that legitimize murder? Please.
06.12.2024 11:14

Our team at Google DeepMind is hiring Student Researchers for 2025!
Interested in understanding reasoning capabilities of neural networks from first principles?
Currently studying for a BS/MS/PhD?
Have solid engineering and research skills?
We want to hear from you! Details in thread.
The left figure showcases the behavior of Hopfield models. Given a query (the initial point of the energy descent), a Hopfield model retrieves the memory (local minimum) closest to that query by minimizing the energy function. A perfect Hopfield model stores patterns in distinct minima (or buckets). In contrast, the right figure illustrates a bad associative memory system, where stored patterns share a single bucket. This enables the creation of spurious patterns, which appear as mixtures of stored patterns. Spurious patterns have lower energy than the memories due to this overlap.
Diffusion models create beautiful novel images, but they can also memorize samples from the training set. How does this blending of features allow the creation of novel patterns? Our new work in the Sci4DL workshop at #neurips2024 shows that diffusion models behave like Dense Associative Memory networks.
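For intuition about the energy-descent picture in the alt text above, here is a minimal classical Hopfield retrieval sketch (my illustration, with hypothetical ±1-valued `patterns` and `query`); Dense Associative Memory replaces the quadratic energy with a sharper one, which is what separates the two figures.

```python
import numpy as np

def hopfield_retrieve(patterns, query, n_iter=100):
    """Descend the energy E(x) = -0.5 x^T W x toward the nearest stored pattern."""
    P = np.asarray(patterns, dtype=float)   # shape (n_patterns, dim), entries +-1
    W = P.T @ P / P.shape[1]                # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)               # no self-connections
    x = np.where(np.asarray(query) >= 0, 1.0, -1.0)
    for _ in range(n_iter):
        # synchronous sign update (asynchronous updates guarantee descent)
        x_new = np.where(W @ x >= 0, 1.0, -1.0)
        if np.array_equal(x_new, x):        # fixed point = local energy minimum
            return x
        x = x_new
    return x
```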
05.12.2024 17:29

The naivete of these takes is always amusing
They could equally be applied to human beings, and they would work just as well
There are indeed cases in which obtaining an SDE equivalence isn't straightforward
04.12.2024 11:10

I have always been saying that diffusion = flow matching.
Is it supposed to be some sort of news now??
However, flow matching theory doesn't provide much guidance on how to do stochastic sampling.
It relies on the extra structure of diffusion.
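Concretely, a generic sketch of that extra structure (my illustration, with `f`, `g`, and `score_fn` as assumed stand-ins): given the forward SDE dx = f dt + g dW and a learned score, diffusion theory yields both a stochastic reverse sampler and a deterministic one with the same marginals.

```python
import torch

def reverse_step(x, t, dt, f, g, score_fn, stochastic=True):
    """One Euler(-Maruyama) step of the reverse dynamics (dt < 0)."""
    s = score_fn(x, t)
    if stochastic:
        # reverse SDE: dx = [f(x,t) - g(t)^2 s] dt + g(t) dW
        dw = torch.randn_like(x) * abs(dt) ** 0.5
        return x + (f(x, t) - g(t) ** 2 * s) * dt + g(t) * dw
    # probability-flow ODE (same marginals): dx = [f(x,t) - 0.5 g(t)^2 s] dt
    return x + (f(x, t) - 0.5 * g(t) ** 2 * s) * dt
```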
Disagree, religious literacy is important
03.12.2024 06:29

Samples y | x from Treeffuser vs. true densities, for multiple values of x under three different scenarios. Treeffuser captures arbitrarily complex conditional distributions that vary with x.
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.
paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...
(1/8)
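A hypothetical usage sketch; the class and method names below are assumptions based on the package's sklearn-style interface, so check the repo's README before relying on them.

```python
import numpy as np
from treeffuser import Treeffuser  # assumed import path

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 1))                      # toy features
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(1000)  # toy noisy target

model = Treeffuser()
model.fit(X, y)
samples = model.sample(X[:10], n_samples=100)        # draws from p(y | x) per input
```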
A common question nowadays: Which is better, diffusion or flow matching?
Our answer: They're two sides of the same coin. We wrote a blog post to show how diffusion models and Gaussian flow matching are equivalent. That's great: it means you can use them interchangeably.
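To make the equivalence concrete, a tiny sketch (mine, with hypothetical helper callables) of the standard identity for Gaussian paths x_t = alpha(t) x_0 + sigma(t) eps: the flow-matching velocity is an affine function of x and the score, u(x,t) = (alpha'/alpha) x + sigma^2 (alpha'/alpha - sigma'/sigma) s(x,t), so either parameterization recovers the other.

```python
# alpha/dalpha and sigma/dsigma are assumed callables for the noise schedule
# and its time derivatives; score_fn is an assumed trained score network.
def velocity_from_score(score_fn, x, t, alpha, dalpha, sigma, dsigma):
    """Flow-matching velocity from a score: both describe the same Gaussian path."""
    a, da = alpha(t), dalpha(t)
    sg, dsg = sigma(t), dsigma(t)
    return (da / a) * x + sg ** 2 * (da / a - dsg / sg) * score_fn(x, t)
```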
I'm still cautiously optimistic that we'll find a way to leverage Bayesian ideas in "Modern" AI without retrofitting. However, I'm very much an agnostic when it comes to the philosophy of uncertainty (Bayes vs frequentist vs imprecise, etc.)
30.11.2024 08:04