
Daniel Wurgaft

@danielwurgaft.bsky.social

PhD @Stanford working w/ Noah Goodman. Studying in-context learning and reasoning in humans and machines. Prev. @UofT CS & Psych.

119 Followers  |  189 Following  |  26 Posts  |  Joined: 03.04.2025

Latest posts by danielwurgaft.bsky.social on Bluesky

Post image

🚨 NEW PREPRINT: Multimodal inference through mental simulation.

We examine how people figure out what happened by combining visual and auditory evidence through mental simulation.

Paper: osf.io/preprints/ps...
Code: github.com/cicl-stanfor...

16.09.2025 19:03 · 👍 52    🔁 15    💬 3    📌 1
Post image

🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut, *belief neglect*: inferring others' preferences as if their beliefs are correct 🧵

17.09.2025 00:58 · 👍 49    🔁 16    💬 2    📌 1
Flyer for the event!

*Sharing for our department's trainees*

🧠 Looking for insight on applying to PhD programs in psychology?

✨ Apply by Sep 25th to Stanford Psychology's 9th annual Paths to a Psychology PhD info-session/workshop to have all of your questions answered!

πŸ“ Application: tinyurl.com/pathstophd2025

02.09.2025 20:01 · 👍 10    🔁 8    💬 0    📌 0
Image: "What do representations tell us about a system?" A mouse with a scope and a neural network, each shown with a vector of activity patterns. Common analyses of neural representations: encoding models (relating activity to task features), comparing models via neural predictivity (e.g., R² to mouse brain activity), and RSA (assessing brain-brain or model-brain correspondence using representational dissimilarity matrices).

In neuroscience, we often try to understand systems by analyzing their representations β€” using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:

05.08.2025 14:36 · 👍 163    🔁 53    💬 5    📌 0
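For concreteness, here is a minimal sketch of the RSA-style comparison mentioned above, using toy activity matrices (the data, sizes, and correlation-distance choice are illustrative assumptions, not the commentary's actual analysis):

```python
# Minimal RSA sketch with toy data (illustrative assumptions only).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
brain_acts = rng.normal(size=(50, 200))  # 50 stimuli x 200 recorded neurons
model_acts = rng.normal(size=(50, 512))  # 50 stimuli x 512 network units

# RDM: pairwise correlation distance between stimulus representations
rdm_brain = pdist(brain_acts, metric="correlation")
rdm_model = pdist(model_acts, metric="correlation")

# RSA score: rank correlation between the two (condensed) RDMs
rho, _ = spearmanr(rdm_brain, rdm_model)
print(f"model-brain RSA (Spearman rho): {rho:.3f}")
```

Because RSA compares only dissimilarity structure, it abstracts away from individual units, which is exactly the kind of analysis choice the commentary asks us to scrutinize.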

Super excited to have the #InfoCog workshop this year at #CogSci2025! Join us in SF for an exciting lineup of speakers and panelists, and check out the workshop's website for more info and a detailed schedule:
sites.google.com/view/infocog...

22.07.2025 19:18 · 👍 26    🔁 7    💬 1    📌 2

Submit your latest and greatest papers to the hottest workshop on the block: cognitive interpretability! 🔥

16.07.2025 14:12 · 👍 7    🔁 1    💬 0    📌 0
First Workshop on Interpreting Cognition in Deep Learning Models (NeurIPS 2025)

Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4

16.07.2025 13:08 · 👍 58    🔁 19    💬 1    📌 3

A bias for simplicity by itself does not guarantee good generalization (see the No Free Lunch Theorems). So an inductive bias is only good to the extent that it reflects structure in the data. Is the world simple? The success of deep nets (with their intrinsic Occam's razor) would suggest yes(?)

08.07.2025 13:57 · 👍 6    🔁 1    💬 2    📌 0

Hi thanks for the comment! I'm not too familiar with the robot-learning literature but would love to learn more about it!

01.07.2025 19:59 · 👍 0    🔁 0    💬 0    📌 0

Really nice analysis!

28.06.2025 08:03 · 👍 12    🔁 3    💬 1    📌 0

Thank you Andrew!! :)

28.06.2025 11:54 · 👍 1    🔁 0    💬 0    📌 0

On a personal note, this is my first full-length first-author paper! @ekdeepl.bsky.social and I both worked so hard on this, and I am so excited about our results and the perspective we bring! Follow for more science of deep learning and human learning!

16/16

28.06.2025 02:35 · 👍 4    🔁 0    💬 0    📌 0
Preview: In-Context Learning Strategies Emerge Rationally
Recent work analyzing in-context learning (ICL) has identified a broad set of strategies that describe model behavior in different experimental conditions. We aim to unify these findings by asking why...

Thank you to amazing collaborators!
@ekdeepl.bsky.social @corefpark.bsky.social @gautamreddy.bsky.social @hidenori8tanaka.bsky.social @noahdgoodman.bsky.social
See the paper for full results and discussion! And watch for updates! We are working on explaining and unifying more ICL phenomena!

15/

28.06.2025 02:35 · 👍 4    🔁 0    💬 1    📌 0

πŸ’‘Key takeaways:
3) A top-down, normative perspective offers a powerful, predictive approach for understanding neural networks, complementing bottom-up mechanistic work.

14/

28.06.2025 02:35 · 👍 2    🔁 0    💬 1    📌 0
Post image

πŸ’‘Key takeaways:
2) A tradeoff between *loss and complexity* is fundamental to understanding model training dynamics, and gives a unifying explanation for ICL phenomena of transient generalization and task-diversity effects!

13/

28.06.2025 02:35 · 👍 2    🔁 0    💬 1    📌 0
Post image

πŸ’‘Key takeaways:
1) Is ICL Bayes-optimal? We argue the better question is *under what assumptions*. Cautiously, we conclude that ICL can be seen as approx. Bayesian under a simplicity bias and sublinear sample efficiency (though see our appendix for an interesting deviation!)

12/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
Post image

Ablations of our analytical expression show that the modeled computational constraints, in their assumed functional forms, are crucial!

11/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
Post image

And reveals some interesting findings: MLP width increases memorization, which is captured by our model as a reduced simplicity bias!

10/

28.06.2025 02:35 · 👍 3    🔁 0    💬 1    📌 0
Post image

Our framework also makes novel predictions:
🔹 **Sub-linear** sample efficiency → sigmoidal transition from generalization to memorization
🔹 **Rapid** behavior change near the M–G crossover boundary
🔹 **Superlinear** scaling of time to transience as data diversity increases

9/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
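To spell out the first prediction's logic in sketch form (notation assumed here): the posterior weight on the memorizing solution is a logistic function of the log posterior odds, so if sublinear sample efficiency makes those log odds drift smoothly through zero, behavior traces out a sigmoidal transition.

```latex
% Sketch (assumed notation): a smooth drift of the log posterior odds
% through zero produces a sigmoidal switch in the posterior weight on M.
P(M \mid D) = \sigma\!\left(\log \frac{P(M \mid D)}{P(G \mid D)}\right),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}.
```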
Video thumbnail

Intuitively, what does this predictive account imply? A rational tradeoff between a strategy's loss and complexity!

🔵 Early: A simplicity bias (prior) favors a less complex strategy (G)
🔴 Late: Reducing loss (likelihood) favors a better-fitting, but more complex strategy (M)

8/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
Video thumbnail

Fitting the three free parameters of our expression, we see that, across checkpoints from 11 different runs, we almost perfectly predict models' *next-token predictions* and the relative distance maps!

We now have a predictive model of task diversity effects and transience!

7/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
Video thumbnail

We assume two well-known facts about neural nets as computational constraints (scaling laws and a simplicity bias). This lets us write a closed-form expression for the posterior odds!

6/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
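As a sketch of the general shape such an expression can take (an illustrative form built from the two stated constraints, not necessarily the paper's exact equation): the prior odds encode the simplicity bias via strategy complexities, and the likelihood odds grow with the amount of pretraining data as the two strategies' power-law losses separate.

```latex
% Illustrative sketch, not the paper's exact expression:
% C(h)   = complexity of strategy h (simplicity bias in the prior)
% L_h(N) = per-sample loss of strategy h after N samples (scaling law)
\log \frac{P(M \mid D)}{P(G \mid D)}
  = -\beta\,[\,C(M) - C(G)\,] + N\,[\,L_G(N) - L_M(N)\,],
\qquad L_h(N) \propto N^{-\alpha_h}.
```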
Post image

We model our learner as behaving optimally in a hypothesis space defined by the M / G predictors. This yields a *hierarchical Bayesian* view:

🔹 Pretraining = updating posterior probability (preference) for strategies
🔹 Inference = posterior-weighted average of strategies

5/

28.06.2025 02:35 · 👍 1    🔁 0    💬 1    📌 0
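In symbols, the hierarchical view above amounts to something like the following (notation assumed for illustration):

```latex
% Pretraining data D updates a posterior over strategies h in {M, G};
% at inference time, predictions are a posterior-weighted average.
P(h \mid D) \propto P(D \mid h)\,P(h), \qquad h \in \{M, G\},
\qquad
p(y \mid x, D) = \sum_{h \in \{M, G\}} P(h \mid D)\, p_h(y \mid x).
```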
Post image

We now have a unifying language to describe what strategies a model transitions between.

Back to our question: *Why* do models switch ICL strategies?! Given M / G are *Bayes-optimal* for the train / true distributions, we invoke rational analysis to answer this!

4/

28.06.2025 02:35 · 👍 2    🔁 0    💬 1    📌 0
Post image

By computing the distance between a model’s outputs and these predictors, we show models transition between memorizing and generalizing predictors as experimental settings are varied! This yields a unifying view on known ICL phenomena of task diversity effects and transience!

3/

28.06.2025 02:35 · 👍 3    🔁 0    💬 1    📌 0
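One simple way to operationalize that distance (a toy sketch: the KL-based distance and the relative-distance definition below are assumptions, not necessarily the paper's metric):

```python
# Toy sketch: place a model's next-token distributions on a scale
# between the memorizing (M) and generalizing (G) predictors.
import numpy as np

def kl(p, q, eps=1e-12):
    """Mean KL divergence between rows of next-token distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1).mean()

def relative_distance(model_probs, m_probs, g_probs):
    """0 = model matches M exactly, 1 = model matches G exactly."""
    d_m = kl(model_probs, m_probs)
    d_g = kl(model_probs, g_probs)
    return d_m / (d_m + d_g)
```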
Video thumbnail

We first define Bayesian predictors for ICL settings that involve learning a finite mixture of tasks:

🔴 Memorizing (M): discrete prior on seen tasks.
🔵 Generalizing (G): continuous prior matching the true task distribution.

These match known strategies from prior work!

2/

28.06.2025 02:35 · 👍 2    🔁 0    💬 1    📌 0
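To make the two predictors concrete, here is a toy Gaussian version (the task setup, hyperparameters, and function names are assumptions for illustration; the paper's actual ICL setting differs):

```python
# Toy sketch: M and G predictors for contexts drawn from Gaussian tasks.
# Each task is a mean vector w; context points x_i ~ N(w, sigma^2 I).
import numpy as np

def memorizing_predictive(x_ctx, seen_tasks, sigma=0.5):
    """M: posterior-mean prediction under a discrete prior on seen tasks."""
    # log-likelihood of the whole context under each pretraining task
    ll = -0.5 * ((x_ctx[None] - seen_tasks[:, None]) ** 2).sum(axis=(1, 2)) / sigma**2
    post = np.exp(ll - ll.max())
    post /= post.sum()
    return post @ seen_tasks

def generalizing_predictive(x_ctx, sigma=0.5, tau=1.0):
    """G: posterior-mean prediction under the continuous N(0, tau^2 I) prior."""
    n = len(x_ctx)
    shrink = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
    return shrink * x_ctx.mean(axis=0)  # conjugate Gaussian posterior mean

rng = np.random.default_rng(0)
tasks = rng.normal(size=(8, 4))                   # 8 pretraining task means
ctx = tasks[0] + 0.5 * rng.normal(size=(10, 4))   # context drawn from task 0
print(memorizing_predictive(ctx, tasks))          # snaps toward tasks[0]
print(generalizing_predictive(ctx))               # shrunken context mean
```

In this toy version, as the set of seen tasks grows dense, M's predictions approach G's, mirroring the role data diversity plays in the thread above.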
Video thumbnail

🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient?

Our work explains this & *predicts Transformer behavior throughout training* without access to its weights! 🧵

1/

28.06.2025 02:35 · 👍 47    🔁 7    💬 2    📌 2
Preview: Scaling up the think-aloud method
The think-aloud method, where participants voice their thoughts as they solve a task, is a valuable source of rich data about human reasoning processes. Yet, it has declined in popularity in contempor...

Thank you to awesome collaborators: @benpry.bsky.social, @gandhikanishk.bsky.social, Cedegao Zhang, @joshtenenbaum.bsky.social, @noahdgoodman.bsky.social

Full paper here: arxiv.org/abs/2505.23931

Code and Data: github.com/benpry/think...

(8/8)

25.06.2025 05:00 · 👍 1    🔁 0    💬 0    📌 0
Post image

This work serves as a proof of concept for scaling up analysis of verbal reports, realizing a vision for automated protocol analysis first proposed by Waterman & Newell back in 1971. We hope this inspires new research on human reasoning using the think-aloud method! (7/8)

25.06.2025 05:00 · 👍 1    🔁 0    💬 1    📌 0
Post image

We also found that human search is highly structured. Using a Gini index to measure consistency, we saw that human reasoning clusters around specific multi-step sequences far more than a random agent does, revealing shared underlying strategies. (6/8)

25.06.2025 05:00 · 👍 0    🔁 0    💬 1    📌 0
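For concreteness, one plausible form of that consistency measure (a sketch; the exact Gini variant and sequence encoding used in the paper are assumptions here):

```python
# Sketch: Gini-style index over the empirical distribution of
# multi-step solution sequences (assumed form of the measure).
from collections import Counter

def gini_index(sequences):
    """1 - sum(p_i^2): 0 when everyone uses one sequence, near 1 when uniform."""
    counts = Counter(map(tuple, sequences))
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# Three participants share one strategy, one explores differently:
print(gini_index([["A", "B"], ["A", "B"], ["A", "B"], ["C"]]))  # 0.375
```

Lower values indicate more clustered reasoning, so human participants scoring below a random agent is what reveals shared strategies.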
