Daniel Wurgaft @NeurIPS

@danielwurgaft.bsky.social

PhD @Stanford working with @noahdgoodman; research fellow @GoodfireAI. Studying in-context learning and reasoning in humans and machines. Prev. @UofT CS & Psych.

136 Followers  |  203 Following  |  26 Posts  |  Joined: 03.04.2025

Latest posts by danielwurgaft.bsky.social on Bluesky

Cracking the code of why, when some choose to 'self-handicap' – Harvard Gazette. New research also offers hints for devising ways to stop students from creating obstacles to success.

The Harvard Gazette has a nice story on my student @yangxiang.bsky.social and her work with @tobigerstenberg.bsky.social
news.harvard.edu/gazette/stor...

04.12.2025 19:43 | 👍 22   🔁 5   💬 0   📌 0
Aligning machine and human visual representations across abstraction levels - Nature. Aligning foundation models with human judgments enables them to more accurately approximate human behaviour and uncertainty across various levels of visual abstraction, while additionally improving th...

What aspects of human knowledge do vision models like CLIP fail to capture, and how can we improve them? We suggest models miss key aspects of global organization; aligning them with human judgments makes them more robust. Check out LukasMuttenthaler's work, finally out (in Nature!?) www.nature.com/articles/s41... + our blog! 1/3

12.11.2025 16:50 | 👍 79   🔁 18   💬 2   📌 1

In LLMs, concepts aren't static: they evolve through time and have rich temporal dependencies.

We introduce Temporal Feature Analysis (TFA) to separate what's inferred from context vs. novel information. A big effort led by @ekdeepl.bsky.social, @sumedh-hindupur.bsky.social, @canrager.bsky.social!

14.11.2025 15:48 | 👍 20   🔁 4   💬 1   📌 0

Humans and LLMs think fast and slow. Do SAEs recover slow concepts in LLMs? Not really.

Our Temporal Feature Analyzer discovers contextual features in LLMs that detect event boundaries, parse complex grammar, and represent ICL patterns.

13.11.2025 22:31 | 👍 18   🔁 8   💬 1   📌 1
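To make the fast/slow distinction above concrete: one crude probe (a generic sketch of my own, not the Temporal Feature Analysis method) is each feature's lag-1 autocorrelation over token positions. Contextual features that track events or in-context patterns persist across tokens; point-wise features do not.

```python
import numpy as np

def lag1_autocorrelation(acts: np.ndarray) -> np.ndarray:
    """acts: (num_tokens, num_features) feature activations over one sequence.
    Returns each feature's lag-1 autocorrelation (high = slow / contextual)."""
    x, y = acts[:-1], acts[1:]
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    denom = np.sqrt((x ** 2).sum(axis=0) * (y ** 2).sum(axis=0)) + 1e-8
    return (x * y).sum(axis=0) / denom

# Toy data: feature 0 tracks a context that only changes at "event boundaries",
# feature 1 fires point-wise with no temporal structure.
rng = np.random.default_rng(0)
slow = np.repeat(rng.normal(size=8), 16)   # 8 "events" of 16 tokens each
fast = rng.normal(size=slow.shape)         # fresh draw at every token
acts = np.stack([slow, fast], axis=1)
print(lag1_autocorrelation(acts))          # high for feature 0, near 0 for feature 1
```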

Excited to have this out! This was a fun project that started with YingQiao and me discussing whether VLMs can do mental simulation of physics like people do, and it culminated in a new method where we prompted image generation models to simulate a series of images frame-by-frame.

13.11.2025 21:36 | 👍 7   🔁 2   💬 1   📌 0

🚨 NEW PREPRINT: Multimodal inference through mental simulation.

We examine how people figure out what happened by combining visual and auditory evidence through mental simulation.

Paper: osf.io/preprints/ps...
Code: github.com/cicl-stanfor...

16.09.2025 19:03 | 👍 52   🔁 15   💬 3   📌 1

🚨 New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut: belief neglect – inferring others' preferences as if their beliefs are correct 🧡

17.09.2025 00:58 | 👍 49   🔁 16   💬 2   📌 1
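A toy sketch of the shortcut (my own illustration under simple assumptions, not the paper's model): full theory of mind infers a preference by marginalizing over what the agent might believe, while belief neglect conditions on the agent's beliefs matching the true state of the world.

```python
# Two boxes; the observer knows the true contents, the agent may be mistaken.
true_contents = {0: "apple", 1: "banana"}
prefs = ["apple", "banana"]
beliefs = [true_contents, {0: "banana", 1: "apple"}]   # correct vs. swapped belief
p_belief = [0.7, 0.3]                                  # observer's prior over the agent's belief

def p_action(pref, belief, action, eps=0.1):
    """Softly rational agent: usually opens the box it believes holds its preferred fruit."""
    return 1 - eps if belief[action] == pref else eps

def infer_pref(action, neglect_beliefs):
    post = {}
    for pref in prefs:
        if neglect_beliefs:
            # Shortcut: treat the agent's beliefs as if they match the true contents.
            post[pref] = p_action(pref, true_contents, action)
        else:
            # Full theory of mind: marginalize over what the agent might believe.
            post[pref] = sum(pb * p_action(pref, b, action) for b, pb in zip(beliefs, p_belief))
    z = sum(post.values())
    return {k: round(v / z, 3) for k, v in post.items()}

action = 0   # the agent opened box 0, which truly holds the apple
print("full ToM:      ", infer_pref(action, neglect_beliefs=False))
print("belief neglect:", infer_pref(action, neglect_beliefs=True))
```

In this toy setup the shortcut yields sharper (over-confident) preference inferences whenever the agent could be mistaken.
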
Flyer for the event!

*Sharing for our department's trainees*

🧠 Looking for insight on applying to PhD programs in psychology?

✨ Apply by Sep 25th to Stanford Psychology's 9th annual Paths to a Psychology PhD info-session/workshop to have all of your questions answered!

πŸ“ Application: tinyurl.com/pathstophd2025

02.09.2025 20:01 β€” πŸ‘ 10    πŸ” 8    πŸ’¬ 0    πŸ“Œ 0
Image 1: What do representations tell us about a system? A mouse with a scope and a neural network, each yielding a vector of activity patterns.
Image 2: Common analyses of neural representations: encoding models (relating activity to task features), comparing models via neural predictivity (e.g., R^2 to mouse brain activity), and RSA (assessing brain-brain or model-brain correspondence using representational dissimilarity matrices).

In neuroscience, we often try to understand systems by analyzing their representations – using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:

05.08.2025 14:36 | 👍 167   🔁 53   💬 5   📌 0
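For context on one of the tools mentioned above, here is a bare-bones version of RSA (a generic sketch, not code from the commentary): build a representational dissimilarity matrix (RDM) for each system over the same conditions, then compare the two RDMs.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """activity: (num_conditions, num_units). Condensed RDM: pairwise
    correlation distance between condition-wise activity patterns."""
    return pdist(activity, metric="correlation")

def rsa_score(act_a, act_b):
    """Second-order similarity: Spearman correlation between the two RDMs."""
    return spearmanr(rdm(act_a), rdm(act_b)).correlation

# Toy example: the same 20 stimuli shown to a "brain" (50 units) and a "model"
# (128 units) that share a 5-dimensional stimulus code.
rng = np.random.default_rng(1)
latent = rng.normal(size=(20, 5))
brain = latent @ rng.normal(size=(5, 50)) + 0.1 * rng.normal(size=(20, 50))
model = latent @ rng.normal(size=(5, 128)) + 0.1 * rng.normal(size=(20, 128))
print(rsa_score(brain, model))   # high, since both RDMs reflect the shared structure
```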

Super excited to have the #InfoCog workshop this year at #CogSci2025! Join us in SF for an exciting lineup of speakers and panelists, and check out the workshop's website for more info and a detailed schedule
sites.google.com/view/infocog...

22.07.2025 19:18 | 👍 27   🔁 7   💬 1   📌 2

Submit your latest and greatest papers to the hottest workshop on the block – on cognitive interpretability! 🔥

16.07.2025 14:12 | 👍 7   🔁 1   💬 0   📌 0
First Workshop on Interpreting Cognition in Deep Learning Models (NeurIPS 2025)

Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4

16.07.2025 13:08 | 👍 58   🔁 19   💬 1   📌 3

A bias for simplicity by itself does not guarantee good generalization (see the No Free Lunch Theorems). So an inductive bias is only good to the extent that it reflects structure in the data. Is the world simple? The success of deep nets (with their intrinsic Occam's razor) would suggest yes(?)

08.07.2025 13:57 | 👍 6   🔁 1   💬 2   📌 0

Hi thanks for the comment! I'm not too familiar with the robot-learning literature but would love to learn more about it!

01.07.2025 19:59 | 👍 0   🔁 0   💬 0   📌 0

Really nice analysis!

28.06.2025 08:03 | 👍 12   🔁 3   💬 1   📌 0

Thank you Andrew!! :)

28.06.2025 11:54 | 👍 1   🔁 0   💬 0   📌 0

On a personal note, this is my first full-length first-author paper! @ekdeepl.bsky.social and I both worked so hard on this, and I am so excited about our results and the perspective we bring! Follow for more science of deep learning and human learning!

16/16

28.06.2025 02:35 | 👍 4   🔁 0   💬 0   📌 0
In-Context Learning Strategies Emerge Rationally. Recent work analyzing in-context learning (ICL) has identified a broad set of strategies that describe model behavior in different experimental conditions. We aim to unify these findings by asking why...

Thank you to amazing collaborators!
@ekdeepl.bsky.social @corefpark.bsky.social @gautamreddy.bsky.social @hidenori8tanaka.bsky.social @noahdgoodman.bsky.social
See the paper for full results and discussion! And watch for updates! We are working on explaining and unifying more ICL phenomena!

15/

28.06.2025 02:35 | 👍 4   🔁 0   💬 1   📌 0

💡 Key takeaways:
3) A top-down, normative perspective offers a powerful, predictive approach for understanding neural networks, complementing bottom-up mechanistic work.

14/

28.06.2025 02:35 | 👍 2   🔁 0   💬 1   📌 0

💡 Key takeaways:
2) A tradeoff between *loss and complexity* is fundamental to understanding model training dynamics, and gives a unifying explanation for ICL phenomena of transient generalization and task-diversity effects!

13/

28.06.2025 02:35 | 👍 2   🔁 0   💬 1   📌 0

💡 Key takeaways:
1) Is ICL Bayes-optimal? We argue the better question is *under what assumptions*. Cautiously, we conclude that ICL can be seen as approx. Bayesian under a simplicity bias and sublinear sample efficiency (though see our appendix for an interesting deviation!)

12/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0

Ablations of our analytical expression show the modeled computational constraints, in their assumed functional forms, are crucial!

11/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0

And it reveals some interesting findings: MLP width increases memorization, which is captured by our model as a reduced simplicity bias!

10/

28.06.2025 02:35 | 👍 3   🔁 0   💬 1   📌 0

Our framework also makes novel predictions:
🔹 **Sub-linear** sample efficiency → sigmoidal transition from generalization to memorization
🔹 **Rapid** behavior change near the M–G crossover boundary
🔹 **Superlinear** scaling of time to transience as data diversity increases

9/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0
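A toy picture of the first prediction (the functional form and numbers are purely illustrative, not the paper's fitted expression): if evidence for the memorizing predictor M accrues sub-linearly with training while its extra complexity is a fixed prior penalty, the posterior weight on M traces out a sigmoid over training.

```python
import numpy as np

def p_memorize(t, loss_gap=3e-3, complexity_gap=8.0, alpha=0.7):
    """Posterior weight on the memorizing predictor M at training step t.
    Evidence for M accrues sub-linearly (~ t**alpha); its extra complexity
    is a constant prior penalty. All numbers are made up for illustration."""
    log_odds = loss_gap * t ** alpha - complexity_gap
    return 1.0 / (1.0 + np.exp(-log_odds))

for t in np.logspace(0, 6, 7):   # 1 ... 1e6 training steps
    print(f"t = {t:>9.0f}   P(M) = {p_memorize(t):.3f}")
```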

Intuitively, what does this predictive account imply? A rational tradeoff between a strategy's loss and complexity!

🔵 Early: A simplicity bias (prior) favors a less complex strategy (G)
🔴 Late: reducing loss (likelihood) favors a better-fitting, but more complex strategy (M)

8/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0

Fitting the three free parameters of our expression, we see that across checkpoints from 11 different runs, we almost perfectly predict *next-token predictions* and the relative distance maps!

We now have a predictive model of task diversity effects and transience!

7/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0
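Roughly how such a fit could be set up (the functional form and parameter names are illustrative stand-ins, and the "observed" checkpoint weights below are synthetic, not data from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def p_memorize(t, a, b, alpha):
    """Illustrative three-parameter form for the posterior weight on M at step t."""
    return 1.0 / (1.0 + np.exp(-(a * t ** alpha - b)))

# Per-checkpoint weights on the memorizing predictor (synthetic stand-ins).
steps = np.array([1e2, 1e3, 1e4, 3e4, 1e5, 3e5, 1e6])
observed = np.array([0.01, 0.02, 0.05, 0.20, 0.75, 0.97, 1.00])

params, _ = curve_fit(p_memorize, steps, observed, p0=[1e-3, 5.0, 0.7],
                      bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 1.0]))
print(dict(zip(["a", "b", "alpha"], params)))
print(np.round(p_memorize(steps, *params), 3))   # fitted curve vs. the observed weights
```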

We assume two well-known facts about neural nets as computational constraints (scaling laws and simplicity bias). This allows writing a closed-form expression for the posterior odds!

6/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0
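Schematically, an expression of that kind looks like the following (illustrative notation and functional form under the stated assumptions, not the paper's exact formula):

```latex
% Schematic log posterior odds between the memorizing (M) and generalizing (G)
% predictors after t pretraining samples; an illustrative form only.
\log \frac{P(M \mid \mathcal{D}_t)}{P(G \mid \mathcal{D}_t)}
  \;\approx\;
  \underbrace{t^{\alpha}\,\bigl[\ell(G) - \ell(M)\bigr]}_{\text{likelihood: loss gap, sub-linear in } t}
  \;-\;
  \underbrace{\beta\,\bigl[C(M) - C(G)\bigr]}_{\text{prior: simplicity bias}}
```

Here \ell(\cdot) is a predictor's loss on the pretraining distribution, C(\cdot) its complexity, and \alpha < 1 encodes sub-linear sample efficiency: early on the prior term dominates (favoring G), and the growing likelihood term eventually favors M, which fits the training tasks better.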

We model our learner as behaving optimally in a hypothesis space defined by the M / G predictors; this yields a *hierarchical Bayesian* view:

🔹 Pretraining = updating posterior probability (preference) for strategies
🔹 Inference = posterior-weighted average of strategies

5/

28.06.2025 02:35 | 👍 1   🔁 0   💬 1   📌 0
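A minimal sketch of the "posterior-weighted average" idea (illustrative numbers only):

```python
import numpy as np

def model_output(p_mem, p_gen, posterior_m):
    """Inference as a posterior-weighted average of the two strategies' predictions."""
    return posterior_m * p_mem + (1.0 - posterior_m) * p_gen

# Toy 4-token vocabulary; the two predictors disagree about the next token.
p_mem = np.array([0.70, 0.10, 0.10, 0.10])   # M: recalls a memorized training task
p_gen = np.array([0.10, 0.70, 0.10, 0.10])   # G: generalizes from the in-context examples
for w in (0.05, 0.50, 0.95):                 # posterior on M: early, mid, late in pretraining
    print(f"P(M) = {w:.2f} ->", model_output(p_mem, p_gen, w))
```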

We now have a unifying language to describe what strategies a model transitions between.

Back to our question: *Why* do models switch ICL strategies?! Given M / G are *Bayes-optimal* for train / true distributions, we invoke the approach of rational analysis to answer this!

4/

28.06.2025 02:35 | 👍 2   🔁 0   💬 1   📌 0

By computing the distance between a model's outputs and these predictors, we show models transition between memorizing and generalizing predictors as experimental settings are varied! This yields a unifying view on known ICL phenomena of task diversity effects and transience!

3/

28.06.2025 02:35 | 👍 3   🔁 0   💬 1   📌 0
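One simple way such a distance could be computed (a sketch under my own assumptions; the paper's exact metric may differ): compare the model's next-token distribution to each predictor's and report a normalized relative distance.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two next-token distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def relative_distance(p_model, p_mem, p_gen):
    """0 = model output matches the memorizing predictor, 1 = matches the generalizing one."""
    d_mem, d_gen = kl(p_model, p_mem), kl(p_model, p_gen)
    return d_mem / (d_mem + d_gen)

p_mem = np.array([0.7, 0.1, 0.1, 0.1])
p_gen = np.array([0.1, 0.7, 0.1, 0.1])
p_model = np.array([0.2, 0.6, 0.1, 0.1])          # this checkpoint sits closer to G
print(round(relative_distance(p_model, p_mem, p_gen), 3))   # ~0.95
```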
