Another intern opening on our team, for a project I'll be involved in (deadline soon!)
25.02.2025 02:05 — 👍 8 🔁 3 💬 0 📌 0
Last month I co-taught a class on diffusion models at MIT during the IAP term: www.practical-diffusion.org
In the lectures, we first introduced diffusion models from a practitioner's perspective, showing how to build a simple but powerful implementation from the ground up (L1).
(1/4)
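For a flavor of the "from the ground up" recipe, here is a minimal sketch in the spirit of the course (not its actual code; the toy data, tiny MLP, schedule, and hyperparameters are all illustrative assumptions): train a network to predict the noise added to data, then generate by iterating the DDPM reverse update.

```python
# Minimal DDPM-style diffusion on toy 2D data (illustrative sketch, not course code).
import torch
import torch.nn as nn

T = 200
betas = torch.linspace(1e-4, 0.02, T)   # noise schedule (assumed values)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)     # cumulative product \bar{alpha}_t

# Tiny MLP denoiser: input (x_t, t/T) -> predicted noise eps_hat.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_data(n):
    # Toy target distribution: mixture of two Gaussians centered at -2 and +2.
    centers = torch.randint(0, 2, (n, 1)).float() * 4.0 - 2.0
    return centers + 0.3 * torch.randn(n, 2)

for step in range(2000):
    # Standard denoising objective: corrupt x0 to x_t, regress the added noise.
    x0 = sample_data(256)
    t = torch.randint(0, T, (256,))
    eps = torch.randn_like(x0)
    xt = abar[t].sqrt()[:, None] * x0 + (1 - abar[t]).sqrt()[:, None] * eps
    loss = ((model(torch.cat([xt, t[:, None] / T], dim=1)) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def sample(n):
    # Ancestral sampling: start from pure noise, apply the reverse update T times.
    x = torch.randn(n, 2)
    for t in reversed(range(T)):
        eps_hat = model(torch.cat([x, torch.full((n, 1), t / T)], dim=1))
        x = (x - betas[t] / (1 - abar[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```

After training, sample(1000) should return points concentrated near the two mixture modes; the same skeleton scales up by swapping the MLP for a U-Net and the toy data for images.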
Our main results study when projective composition is achieved by linearly combining scores.
We prove it suffices for particular independence properties to hold in pixel-space. Importantly, some results extend to independence in feature-space... but new complexities also arise (see the paper!) 5/5
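For concreteness, one standard reading of "linearly combining scores," in illustrative notation that may not match the paper's exact setup: given component distributions $p_1, \dots, p_k$ and a shared base distribution $p_0$, define

$$
\nabla_x \log \tilde{p}(x) = \nabla_x \log p_0(x) + \sum_{i=1}^{k} \big( \nabla_x \log p_i(x) - \nabla_x \log p_0(x) \big),
$$

which is the score of the product $\tilde{p}(x) \propto p_0(x) \prod_{i} p_i(x) / p_0(x)$. A known caveat is that applying this combination at every noise level of a diffusion sampler is only a heuristic for sampling from $\tilde{p}$; the independence conditions above describe regimes where this heuristic provably achieves the intended composition.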
We formalize this idea with a definition called Projective Composition, based on projection functions that extract the "key features" for each distribution to be composed. 4/
11.02.2025 05:59 — 👍 6 🔁 0 💬 1 📌 0
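Paraphrasing the definition in symbols (illustrative notation): given distributions $p_1, \dots, p_k$ with projection functions $\Pi_1, \dots, \Pi_k$ extracting their key features, a distribution $q$ is a projective composition of the $p_i$ if, for every $i$, $\Pi_i(x)$ with $x \sim q$ is distributed as $\Pi_i(x)$ with $x \sim p_i$. The definition constrains $q$ only through the projections, which is what allows a "correct" composition to be OOD relative to every individual $p_i$.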
What does it mean for composition to "work" in these diverse settings? We need to specify which aspects of each distribution we care about, i.e. the "key features" that characterize a hat, dog, horse, or object-at-a-location. The "correct" composition should have all the features at once. 3/
11.02.2025 05:59 — 👍 2 🔁 0 💬 1 📌 0
Part of the challenge is that we may want compositions to be OOD w.r.t. the distributions being composed. For example, in this CLEVR experiment we trained diffusion models on images of a *single* object conditioned on location, and composed them to generate images of *multiple* objects. 2/
11.02.2025 05:59 — 👍 2 🔁 0 💬 1 📌 0
Paper 🧵 (cross-posted at X): When does composition of diffusion models "work"? Intuitively, the reason dog+hat works and dog+horse doesn't has something to do with independence between the concepts being composed. The tricky part is to formalize exactly what this means. 1/
11.02.2025 05:59 — 👍 39 🔁 15 💬 2 📌 2
finally managed to sneak my dog into a paper: arxiv.org/abs/2502.04549
10.02.2025 05:03 — 👍 62 🔁 4 💬 1 📌 1
Credit to: x.com/sjforman/sta...
09.02.2025 04:54 — 👍 1 🔁 0 💬 0 📌 0
nice idea actually lol: "Periodic cooking of eggs": www.nature.com/articles/s44...
09.02.2025 04:53 — 👍 7 🔁 2 💬 1 📌 0
Reminder of a great dictum in research, one of 3 drilled into us by my PhD supervisor: "Don't believe anything obtained only one way", for which the actionable dictum is "immediately do a 2nd independent test of something that looks interesting before in any way betting on it". It's a great activity!
05.02.2025 20:17 — 👍 12 🔁 3 💬 0 📌 0
I've been in major denial about how powerful LLMs are, mainly bc I know of no good reason for it to be true. I imagine this was how deep learning felt to theorists the first time around 😬
04.02.2025 17:25 — 👍 22 🔁 0 💬 0 📌 0
Last year, we funded 250 authors and other contributors to attend #ICLR2024 in Vienna as part of this program. If you or your organization want to directly support contributors this year, please get in touch! Hope to see you in Singapore at #ICLR2025!
21.01.2025 15:52 — 👍 37 🔁 14 💬 1 📌 0
Happy for you Peli!!
18.01.2025 23:32 — 👍 1 🔁 0 💬 0 📌 0
The thing about "AI progress is hitting a wall" is that AI progress (like most scientific research) is a maze, and the way you solve a maze is by constantly hitting walls and changing directions.
18.01.2025 15:45 — 👍 96 🔁 11 💬 4 📌 0
for example I never trust an experiment in a paper unless (a) I know the authors well or (b) I've reproduced the results myself
11.01.2025 22:43 — 👍 6 🔁 0 💬 0 📌 0
imo most academics are skeptical of papers? It's well-known that many accepted papers are overclaimed or just wrong; there are only a few papers people really pay attention to despite the volume
11.01.2025 22:42 — 👍 8 🔁 0 💬 1 📌 0
Thrilled to share the latest work from our team at @Apple, where we achieve interpretable and fine-grained control of LLMs and Diffusion models via Activation Transport 🔥
📄 arxiv.org/abs/2410.23054
🛠️ github.com/apple/ml-act
0/9 🧵
📢 My team at Meta (including Yaron Lipman and Ricky Chen) is hiring a postdoctoral researcher to help us build the next generation of flow, transport, and diffusion models! Please apply here and message me:
www.metacareers.com/jobs/1459691...
Giving a short talk at JMM soon, which might finally be the push I needed to learn Lean…
03.01.2025 18:30 — 👍 1 🔁 0 💬 1 📌 0
This optimal denoiser has a closed form for finite train sets and, notably, does not reproduce its train set; it can sort of "compose consistent patches." Good exercise for the reader: work out the details to explain Figure 3.
01.01.2025 02:46 — 👍 10 🔁 0 💬 0 📌 0
Just read this, neat paper! I really enjoyed Figure 3 illustrating the basic idea: suppose you train a diffusion model where the denoiser is restricted to be "local" (each pixel i only depends on its 3x3 neighborhood N(i)). The optimal local denoiser for pixel i is E[ x_0[i] | x_t[N(i)] ]... (cont.)
01.01.2025 02:46 — 👍 39 🔁 2 💬 1 📌 0
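For reference, the closed form in question, in illustrative notation (assuming a forward process $x_t = \alpha_t x_0 + \sigma_t \varepsilon$, $\varepsilon \sim \mathcal{N}(0, I)$, and train set $\{x^{(1)}, \dots, x^{(n)}\}$): the optimal denoiser is a softmax-weighted average of train points,

$$
\mathbb{E}[x_0 \mid x_t] = \sum_{j=1}^{n} w_j(x_t)\, x^{(j)}, \qquad
w_j(x_t) = \frac{\exp\big(-\lVert x_t - \alpha_t x^{(j)} \rVert^2 / 2\sigma_t^2\big)}{\sum_{m=1}^{n} \exp\big(-\lVert x_t - \alpha_t x^{(m)} \rVert^2 / 2\sigma_t^2\big)}.
$$

In the local version the same weights are computed per pixel from the patch $x_t[N(i)]$ alone, so different pixels can put their weight on different train images; stitching those per-patch votes together is what "compose consistent patches" means, and why samples need not reproduce any single train image.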
Neat, I'll take a closer look! (I think I saw an earlier talk you gave on this as well)
31.12.2024 20:15 — 👍 1 🔁 0 💬 0 📌 0
LLMs don't have motives, goals, or intents, and so they won't lie or deceive in order to obtain them. But they are fantastic at replicating human culture, and there, goals, intents, and deceit abound. So yes, we should also care about such "behaviors" (outputs) in deployed systems.
26.12.2024 19:05 — 👍 62 🔁 10 💬 7 📌 1
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data assimilation, and diffusion/flow models. Applications are open until the end of January. Details:
alexxthiery.github.io/jobs/2024_di...
"Should you still get a PhD given o3" feels like a weird category error. Yes, obviously you should still have fun and learn things in a world with capable AI. What else are you going to do, sit around on your hands?
22.12.2024 04:29 — 👍 89 🔁 6 💬 5 📌 1
sites.google.com/view/m3l-202...
14.12.2024 06:57 — 👍 1 🔁 0 💬 0 📌 0
Catch our talk about CFG at the M3L workshop Saturday morning @ NeurIPS! I'll also be at the morning poster session, happy to chat
14.12.2024 06:56 — 👍 14 🔁 0 💬 1 📌 0
Found slides by Ankur Moitra (presented at a TCS For All event) on "How to do theoretical research." Full of great advice!
My favourite: "Find the easiest problem you can't solve. The more embarrassing, the better!"
Slides: drive.google.com/file/d/15VaT...
TCS For All: sigact.org/tcsforall/
It's near the west entrance, on the west side of the conference center, first floor, in case that helps!
When a bunch of diffusers sit down and talk shop, their flow cannot be matched
It's time for the #NeurIPS2024 diffusion circle!
📍 Join us at 3PM on Friday, December 13. We'll meet near this thing, and venture out from there to find a good spot to sit. Tell your friends!