Great blog post on rotary position embeddings (RoPE) in more than one dimension, with interactive visualisations, a bunch of experimental results, and code!
28.07.2025 14:51
... also very honoured and grateful to see my blog linked in the video description!
26.07.2025 21:59

I blog and give talks to help build people's intuition for diffusion models. YouTubers like @3blue1brown.com and Welch Labs have been a huge inspiration: their ability to make complex ideas in maths and physics approachable is unmatched. Really great to see them tackle this topic!
26.07.2025 21:59

Everyone is welcome!
15.07.2025 21:39

Hello #ICML2025, anyone up for a diffusion circle? We'll just sit down somewhere and talk shop.
Join us at 3PM on Thursday July 17. We'll meet here (see photo, near the west building's west entrance), and venture out from there to find a good spot to sit. Tell your friends!
Diffusion models have analytical solutions, but these involve sums over the entire training set, and they don't generalise at all. Their main use is to help us understand how practical diffusion models do generalise.
Nice blog + code by Raymond Fan: rfangit.github.io/blog/2025/op...
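For the curious, here's a minimal sketch of what the analytical solution looks like (my own illustration, not code from the linked post), assuming the variance-exploding parameterisation x_t = x_0 + sigma * eps: the optimal denoiser is a softmax-weighted average over the whole training set.

```python
# Illustrative sketch (not from the linked post): the analytical optimal
# denoiser E[x_0 | x_t] under the empirical data distribution, assuming
# the variance-exploding parameterisation x_t = x_0 + sigma * eps.
import numpy as np

def analytical_denoiser(x_t, train_data, sigma):
    """x_t: (d,) noisy input; train_data: (n, d); sigma: noise std."""
    # Gaussian log-likelihood of x_t around each training point,
    # up to a constant shared by all points.
    sq_dists = np.sum((train_data - x_t) ** 2, axis=-1)
    log_w = -sq_dists / (2.0 * sigma ** 2)
    w = np.exp(log_w - log_w.max())  # numerically stable softmax weights
    w /= w.sum()
    # The sum runs over the entire training set. As sigma -> 0, the weights
    # collapse onto the nearest training example, so sampling with this
    # denoiser can only ever reproduce training data.
    return w @ train_data
```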
Note also that getting this number slightly wrong isn't that big a deal. Even if you make the base 100k instead of 10k, it's not going to change the granularity of the high frequencies that much, because of the logarithmic frequency spacing.
24.06.2025 23:39

The frequencies are log-spaced, so historically, a base of 10k was plenty to ensure that all positions can be uniquely distinguished. Nowadays, of course, sequences can be quite a bit longer.
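To make the log-spacing concrete, here's a tiny sketch (my own illustration) of the standard RoPE frequency computation with two different bases:

```python
# Illustrative sketch of standard RoPE frequencies: omega_k = base^(-2k/dim),
# geometrically (log-)spaced between 1 and 1/base.
import numpy as np

def rope_frequencies(dim, base=10_000.0):
    k = np.arange(dim // 2)
    return base ** (-2.0 * k / dim)

f_10k = rope_frequencies(64, base=10_000.0)
f_100k = rope_frequencies(64, base=100_000.0)
print(f_10k[:4])   # approx. [1.0, 0.75, 0.56, 0.42]
print(f_100k[:4])  # approx. [1.0, 0.70, 0.49, 0.34]
# The high-frequency end barely moves when the base goes from 10k to 100k;
# only the low-frequency tail stretches out to cover longer sequences.
```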
24.06.2025 23:39

Here's the third and final part of Slater Stich's "History of diffusion" interview series!
The other two interviewees' research played a pivotal role in the rise of diffusion models, whereas I just like to yap about them. This was a wonderful opportunity to do exactly that!
The ML for audio workshop is back at ICML 2025 in Vancouver! It will take place on Saturday, July 19. Featuring invited talks from Dan Ellis, Albert Gu, James Betker, Laura Laurenti and Pratyusha Sharma.
Submission deadline: May 23 (Friday next week)
mlforaudioworkshop.github.io
I am very happy to share our latest work on the information theory of generative diffusion:
"Entropic Time Schedulers for Generative Diffusion Models"
We find that the conditional entropy offers a natural data-dependent notion of time during generation.
Link: arxiv.org/abs/2504.13612
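Going by the abstract alone, here's a hedged sketch of how such a data-dependent notion of time might be used to place sampling steps (a hypothetical illustration, not the authors' code):

```python
# Hypothetical illustration (not the authors' code): place sampling steps
# uniformly in an estimated conditional entropy H(t), instead of uniformly
# in ordinary diffusion time t. Assumes H is monotone along the process.
import numpy as np

def entropic_time_steps(ts, H, num_steps):
    """ts: (n,) grid of diffusion times; H: (n,) entropy estimates on that grid."""
    tau = (H - H[0]) / (H[-1] - H[0])           # normalised "entropic time" in [0, 1]
    targets = np.linspace(0.0, 1.0, num_steps)  # uniform steps in entropic time
    return np.interp(targets, tau, ts)          # map back to ordinary time t
```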
One weird trick for better diffusion models: concatenate some DINOv2 features to your latent channels!
Combining latents with PCA components extracted from DINOv2 features yields faster training and better samples. Also enables a new guidance strategy. Simple and effective!
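A minimal sketch of how I read the trick (hypothetical shapes and names, not the paper's code): project DINOv2 patch features onto a few PCA components, resize them to the latent grid, and concatenate along the channel axis.

```python
# Hypothetical sketch of the idea (not the paper's code): concatenate a few
# PCA components of DINOv2 patch features to the latent channels.
import torch
import torch.nn.functional as F

def augment_latents(latents, dino_features, pca_basis, pca_mean):
    """
    latents:       (B, C, H, W) VAE latents
    dino_features: (B, N, D)    DINOv2 patch features on a g x g grid, N = g * g
    pca_basis:     (D, K)       top-K PCA directions, fit offline on DINOv2 features
    pca_mean:      (D,)         feature mean used for the PCA fit
    """
    B, N, D = dino_features.shape
    g = int(N ** 0.5)
    pcs = (dino_features - pca_mean) @ pca_basis    # (B, N, K) PCA projections
    pcs = pcs.transpose(1, 2).reshape(B, -1, g, g)  # (B, K, g, g) spatial maps
    # Match the latent resolution, then stack along the channel axis.
    pcs = F.interpolate(pcs, size=latents.shape[-2:], mode="bilinear")
    return torch.cat([latents, pcs], dim=1)         # (B, C + K, H, W)
```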
New blog post: let's talk about latents!
sander.ai/2025/04/15/l...
Amazing interview with Yang Song, one of the key researchers we have to thank for diffusion models.
The most important lesson: be fearless! The community's view on score matching was quite pessimistic at the time; he went against the grain and made it work at scale!
www.youtube.com/watch?v=ud6z...
Introducing Gemini 2.5, our most intelligent model with impressive capabilities in advanced reasoning and coding.
Now integrating thinking capabilities, 2.5 Pro Experimental is our most performant Gemini model yet. It's #1 on the LM Arena leaderboard.
We are hiring on the Generative Media team in London: boards.greenhouse.io/deepmind/job...
We work on Imagen, Veo, Lyria and all that good stuff. Come work with us! If you're interested, apply before Feb 28.
Great interview with @jascha.sohldickstein.com about diffusion models! This is the first in a series: similar interviews with Yang Song and yours truly will follow soon.
(One of these is not like the others: both of them basically invented the field, and I occasionally write a blog post.)
Yes! Also listen to this and contemplate the universe: grumusic.bandcamp.com/album/cosmog...
28.01.2025 23:55

This is just a tiny fraction of what's available; check out the schedule for more: neurips.cc/virtual/2024...
22.01.2025 21:06

10. Last but not least, here's my own workshop talk about multimodal iterative refinement: the methodological tension between language and perceptual modalities, autoregression and diffusion, and how to bring these together. neurips.cc/virtual/2024...
9. A great overview of various strategies for merging multiple models together, by Colin Raffel: neurips.cc/virtual/2024...
8. Ishan Misra gives a nice overview of Meta's Movie Gen model (I have some questions about the diffusion vs. flow matching comparison though): neurips.cc/virtual/2024...
7. More on test-time scaling from @tomgoldstein.bsky.social, using a different approach based on recurrence: neurips.cc/virtual/2024... (Some interesting comments on the link with diffusion models in the questions at the end!)
6. @polynoamial.bsky.social talks about scaling compute at inference time, and the trade-offs involved, in language models but also in other settings: neurips.cc/virtual/2024...
5. Sparse autoencoders were in vogue well over a decade ago, back when I was doing my PhD. They've recently been revived in the context of mechanistic interpretability of LLMs. @neelnanda.bsky.social gives a nice overview: neurips.cc/virtual/2024...
4. Insights from @suryaganguli.bsky.social on creativity, generalisation and overfitting in diffusion models: neurips.cc/virtual/2024...
3. @eerosim.bsky.social provides an in-depth look at the geometry of the distribution of natural images. Extremely relevant to anyone trying to understand what diffusion models are really doing: neurips.cc/virtual/2024...
2. A great talk from Alexis Conneau demonstrating the various challenges involved in giving LLMs a voice: neurips.cc/virtual/2024...
1. @davidduvenaud.bsky.social gave an inspiring talk about using language models to learn to represent functions, the kind of thing people like to use e.g. Gaussian processes for: neurips.cc/virtual/2024...
PSA: #NeurIPS2024 recordings are now publicly available!
The workshops always have tons of interesting things on at once, so the FOMO is real. Luckily it's all recorded, so I've been catching up on what I missed.
Thread below with some personal highlights.