@danielwurgaft.bsky.social
PhD @Stanford working w/ @noahdgoodman and research fellow @GoodfireAI. Studying in-context learning and reasoning in humans and machines. Prev. @UofT CS & Psych.
The Harvard Gazette has a nice story on my student @yangxiang.bsky.social and her work with @tobigerstenberg.bsky.social
news.harvard.edu/gazette/stor...
What aspects of human knowledge do vision models like CLIP fail to capture, and how can we improve them? We suggest models miss key global organization; aligning them makes them more robust. Check out Lukas Muttenthaler's work, finally out (in Nature!?) www.nature.com/articles/s41... + our blog! 1/3
12.11.2025 16:50
In LLMs, concepts aren't static: they evolve through time and have rich temporal dependencies.
We introduce Temporal Feature Analysis (TFA) to separate what's inferred from context vs. novel information. A big effort led by @ekdeepl.bsky.social, @sumedh-hindupur.bsky.social, @canrager.bsky.social!
Humans and LLMs think fast and slow. Do SAEs recover slow concepts in LLMs? Not really.
Our Temporal Feature Analyzer discovers contextual features in LLMs that detect event boundaries, parse complex grammar, and represent ICL patterns.
Excited to have this out! This was a fun project that started with YingQiao and me discussing whether VLMs can do mental simulation of physics like people do, and it culminated in a new method where we prompted image generation models to simulate a series of images frame-by-frame.
13.11.2025 21:36
NEW PREPRINT: Multimodal inference through mental simulation.
We examine how people figure out what happened by combining visual and auditory evidence through mental simulation.
Paper: osf.io/preprints/ps...
Code: github.com/cicl-stanfor...
New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!
Humans are capable of sophisticated theory of mind, but when do we use it?
We formalize & document a new cognitive shortcut: belief neglect, i.e., inferring others' preferences as if their beliefs were correct.
Flyer for the event!
*Sharing for our department's trainees*
Looking for insight on applying to PhD programs in psychology?
Apply by Sep 25th to Stanford Psychology's 9th annual Paths to a Psychology PhD info-session/workshop to have all of your questions answered!
Application: tinyurl.com/pathstophd2025
[Image: "What do representations tell us about a system?" A mouse with a scope and a neural network are each shown with a vector of activity patterns. Panels illustrate common analyses of neural representations: encoding models relating activity to task features, comparing models by their neural predictivity (R^2 to mouse brain activity), and RSA assessing brain-brain or model-brain correspondence using representational dissimilarity matrices.]
In neuroscience, we often try to understand systems by analyzing their representations, using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
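As a concrete reference point for the kind of analyses the commentary questions, here is a minimal RSA sketch with made-up data; the array shapes, the correlation-distance RDM, and the Spearman comparison are illustrative assumptions, not the commentary's own pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """Condensed representational dissimilarity matrix:
    pairwise correlation distances between condition-wise activity patterns."""
    return pdist(activity, metric="correlation")

def rsa_score(activity_a, activity_b):
    """Standard RSA: rank-correlate the two systems' RDMs."""
    rho, _ = spearmanr(rdm(activity_a), rdm(activity_b))
    return rho

# Toy example: a "brain" (20 conditions x 50 neurons) vs. a "model" (20 x 64 units).
rng = np.random.default_rng(0)
brain = rng.normal(size=(20, 50))
model = brain @ rng.normal(size=(50, 64)) + 0.1 * rng.normal(size=(20, 64))
print(f"RSA score: {rsa_score(brain, model):.2f}")
```

An analysis like this only sees the representational geometry it is pointed at, which is the kind of selectivity the commentary asks about.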
05.08.2025 14:36
Super excited to have the #InfoCog workshop this year at #CogSci2025! Join us in SF for an exciting lineup of speakers and panelists, and check out the workshop's website for more info and a detailed schedule.
sites.google.com/view/infocog...
Submit your latest and greatest papers to the hottest workshop on the block: cognitive interpretability!
16.07.2025 14:12
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025!
How can we interpret the algorithms and representations underlying complex behavior in deep learning models?
coginterp.github.io/neurips2025/
1/4
A bias for simplicity by itself does not guarantee good generalization (see the No Free Lunch Theorems). So an inductive bias is only good to the extent that it reflects structure in the data. Is the world simple? The success of deep nets (with their intrinsic Occam's razor) would suggest yes(?)
08.07.2025 13:57
Hi, thanks for the comment! I'm not too familiar with the robot-learning literature but would love to learn more about it!
01.07.2025 19:59
Really nice analysis!
28.06.2025 08:03
Thank you, Andrew!! :)
28.06.2025 11:54
On a personal note, this is my first full-length first-author paper! @ekdeepl.bsky.social and I both worked so hard on this, and I am so excited about our results and the perspective we bring! Follow for more science of deep learning and human learning!
16/16
Thank you to my amazing collaborators!
@ekdeepl.bsky.social @corefpark.bsky.social @gautamreddy.bsky.social @hidenori8tanaka.bsky.social @noahdgoodman.bsky.social
See the paper for full results and discussion! And watch for updates! We are working on explaining and unifying more ICL phenomena! 15/
Key takeaways:
3) A top-down, normative perspective offers a powerful, predictive approach for understanding neural networks, complementing bottom-up mechanistic work.
14/
Key takeaways:
2) A tradeoff between *loss and complexity* is fundamental to understanding model training dynamics, and gives a unifying explanation for the ICL phenomena of transient generalization and task-diversity effects!
13/
Key takeaways:
1) Is ICL Bayes-optimal? We argue the better question is *under what assumptions*. Cautiously, we conclude that ICL can be seen as approx. Bayesian under a simplicity bias and sublinear sample efficiency (though see our appendix for an interesting deviation!)
12/
Ablations of our analytical expression show that the modeled computational constraints, in their assumed functional forms, are crucial!
11/
And it reveals some interesting findings: MLP width increases memorization, which our model captures as a reduced simplicity bias!
10/
Our framework also makes novel predictions:
- **Sub-linear** sample efficiency → a sigmoidal transition from generalization to memorization
- **Rapid** behavior change near the M/G crossover boundary
- **Superlinear** scaling of the time to transience as data diversity increases
9/
Intuitively, what does this predictive account imply? A rational tradeoff between a strategy's loss and complexity! (Toy sketch after this post.)
- Early: a simplicity bias (the prior) favors the less complex strategy (G)
- Late: reducing loss (the likelihood) favors the better-fitting but more complex strategy (M)
8/
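To make the early-vs-late intuition concrete, here is a toy numeric sketch (all numbers made up, not from the paper): a fixed complexity penalty on M plus a small per-example evidence term for M, so the log posterior odds start out favoring G and eventually flip to M.

```python
# Toy illustration of the loss-complexity tradeoff (made-up numbers).
complexity_penalty = -5.0      # simplicity-bias prior: M is more complex, so it starts penalized
per_example_evidence = 0.01    # M fits each pretraining example slightly better than G
for n_seen in (10, 100, 1000):
    log_odds_m_over_g = complexity_penalty + per_example_evidence * n_seen
    winner = "M (memorize)" if log_odds_m_over_g > 0 else "G (generalize)"
    print(f"n={n_seen:5d}  log-odds={log_odds_m_over_g:+.1f}  -> {winner}")
```

With these numbers, the preferred strategy flips from G to M somewhere between 100 and 1000 seen examples.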
Fitting the three free parameters of our expression, we see that across checkpoints from 11 different runs, we almost perfectly match models' *next-token predictions* and the relative distance maps!
We now have a predictive model of task diversity effects and transience!
7/
We assume two well-known facts about neural nets as computational constraints (scaling laws and simplicity bias). This allows us to write a closed-form expression for the posterior odds!
6/
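A hedged sketch of what such a closed-form expression could look like: a complexity-penalized prior term plus a likelihood term whose effective sample count grows as a power law. The parameter names (alpha, beta) and the exact functional form here are assumptions for illustration; see the paper for the actual expression.

```python
def log_posterior_odds_m_over_g(n_samples, loss_m, loss_g,
                                complexity_m, complexity_g,
                                alpha=0.8, beta=1.0):
    """Illustrative log posterior odds of the memorizing (M) over the
    generalizing (G) strategy.
    - prior term: a simplicity bias of strength beta penalizes the more complex strategy
    - likelihood term: per-sample loss advantage, scaled by an effective sample
      count n^alpha (sublinear sample efficiency, in the spirit of scaling laws)
    """
    prior_term = -beta * (complexity_m - complexity_g)
    likelihood_term = (n_samples ** alpha) * (loss_g - loss_m)
    return prior_term + likelihood_term
```

With loss_m < loss_g and complexity_m > complexity_g, this form reproduces the early-G / late-M flip described in the tradeoff post above.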
We model our learner as behaving optimally in a hypothesis space defined by the M / G predictors; this yields a *hierarchical Bayesian* view (sketch after this post):
- Pretraining = updating the posterior probability (preference) over strategies
- Inference = a posterior-weighted average of strategies
5/
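A minimal sketch of the posterior-weighted-average view of inference, assuming just the two strategies M and G; the names and the two-strategy restriction are simplifications of the thread's description, not the paper's full model.

```python
import numpy as np

def posterior_weighted_prediction(p_memorize, probs_m, probs_g):
    """Inference as a posterior-weighted mixture of strategy predictions.
    p_memorize: current posterior probability of the memorizing strategy M.
    probs_m, probs_g: each strategy's next-token distribution (same vocabulary)."""
    return p_memorize * np.asarray(probs_m) + (1.0 - p_memorize) * np.asarray(probs_g)

# Example with a 4-token vocabulary: the output interpolates between M and G.
probs_m = [0.70, 0.10, 0.10, 0.10]   # memorizing predictor
probs_g = [0.25, 0.25, 0.25, 0.25]   # generalizing predictor
print(posterior_weighted_prediction(p_memorize=0.2, probs_m=probs_m, probs_g=probs_g))
```

As pretraining shifts the posterior (p_memorize), the same mixture moves smoothly from G-like to M-like behavior.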
We now have a unifying language to describe what strategies a model transitions between.
Back to our question: *Why* do models switch ICL strategies?! Given that M / G are *Bayes-optimal* for the train / true distributions, we invoke the approach of rational analysis to answer this!
4/
By computing the distance between a model's outputs and these predictors, we show that models transition between memorizing and generalizing predictors as experimental settings are varied! This yields a unifying view of the known ICL phenomena of task-diversity effects and transience!
3/
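To illustrate what "distance between a model's outputs and these predictors" could look like in practice, here is a hedged sketch using KL divergence and a normalized relative distance; the metric choice and names are assumptions, not necessarily the ones used in the paper.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two next-token distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def relative_distance(model_probs, mem_probs, gen_probs):
    """0 means the model's outputs match the memorizing predictor (M),
    1 means they match the generalizing predictor (G)."""
    d_m, d_g = kl(model_probs, mem_probs), kl(model_probs, gen_probs)
    return d_m / (d_m + d_g)

# Toy example over a 4-token vocabulary.
mem_probs = [0.70, 0.10, 0.10, 0.10]
gen_probs = [0.25, 0.25, 0.25, 0.25]
model_probs = [0.55, 0.15, 0.15, 0.15]
print(f"relative distance (0=M, 1=G): {relative_distance(model_probs, mem_probs, gen_probs):.2f}")
```

Tracking this quantity across checkpoints and experimental settings gives the kind of relative distance map the thread describes.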