Shoutout again to the amazing advisor team of
@gershbrain.bsky.social and @fierycushman.bsky.social!
Full paper: osf.io/preprints/ps...
This has implications for AI and cognitive modeling:
When designing systems to reason socially, we shouldn't assume full inference is always used, or always needed.
Humans strike a balance between accuracy and efficiency.
We model this in a Bayesian framework, comparing 3 hypotheses (toy sketch below):
1. Full ToM: preference + belief (inferred from environment) → action
2. Correspondence bias: preference → action
3. Belief neglect: preference + environment (ignoring beliefs) → action
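A minimal runnable sketch of the three observer models (my own toy code, not the paper's implementation; the gem names, 0.9/0.1 action noise, and uniform belief prior are illustrative assumptions):

```python
# Toy observer models for inferring which gem an agent prefers from one
# action. Everything here (gem names, 0.9/0.1 action noise, 0.5 belief
# prior) is an illustrative assumption, not the paper's implementation.

def agent_action_prob(action, pref, thinks_b_reachable):
    """P(action | preference, belief): the agent heads for its preferred
    gem unless it thinks gem B is out of reach, then settles for A."""
    target = pref if (pref == "A" or thinks_b_reachable) else "A"
    return 0.9 if action == target else 0.1

def infer_preference(action, model, door_open):
    """Posterior over preferences {A, B} under each observer model."""
    scores = {}
    for pref in ("A", "B"):
        if model == "full_tom":
            # Joint inference: marginalize over the agent's (possibly
            # false) belief about the door, uniform prior on beliefs.
            scores[pref] = sum(
                0.5 * agent_action_prob(action, pref, believed)
                for believed in (True, False))
        elif model == "belief_neglect":
            # Use the true environment, but assume the agent's belief
            # matches it: skip belief inference entirely.
            scores[pref] = agent_action_prob(action, pref, door_open)
        elif model == "correspondence_bias":
            # Ignore environment and beliefs: action mirrors preference.
            scores[pref] = 0.9 if action == pref else 0.1
    z = sum(scores.values())
    return {p: round(s / z, 2) for p, s in scores.items()}

# The agent walks to gem A. With the door truly open (but possibly
# believed locked), belief neglect confidently infers a preference for A,
# while full ToM stays uncertain. With the door truly locked, belief
# neglect (unlike correspondence bias) treats the action as uninformative.
for door_open in (True, False):
    for model in ("full_tom", "belief_neglect", "correspondence_bias"):
        print(f"door_open={door_open}, {model}: "
              f"{infer_preference('A', model, door_open)}")
```

In this toy, belief neglect uses the true door state (unlike correspondence bias) but never entertains a false belief (unlike full ToM).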
People flexibly switch depending on context!
With minimal training, participants started engaging in full joint inference over beliefs and preferences.
But without that training, belief neglect was common.
This suggests people adaptively allocate cognitive effort, depending on task structure.
Belief neglect is different from correspondence bias:
People DO account for environmental constraints (e.g., locked doors).
But they skip reasoning about what the agent believes about the environment.
It's a mid-level shortcut.
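In posterior form (my notation, not the paper's; r = preference, a = action, e = environment, b = the agent's belief), the two shortcuts condition on different things:

```latex
\text{Correspondence bias:} \quad P(r \mid a) \propto P(a \mid r)\,P(r)
\qquad
\text{Belief neglect:} \quad P(r \mid a, e) \propto P(a \mid r,\ b = e)\,P(r)
```

Belief neglect keeps the environment e but plugs in the true state of the world as the agent's belief, instead of inferring b.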
We find that, by default, people often neglect the agent's beliefs.
They infer preferences as if the agent's beliefs were correct, even when they're not.
This is what we call belief neglect.
In our task, participants watched agents navigate grid worlds to collect gems.
Sometimes a gem was hidden behind a door. Participants were told that some agents falsely believed they couldn't open these doors.
They then had to infer which gem the agents preferred.
The question we ask is: When do people actually engage in full ToM reasoning?
And when do they fall back on faster heuristics?
Theory of mind (ToM), reasoning about others' beliefs and desires, is central to human intelligence.
It's often framed as Bayesian inverse planning: we observe a person's action, then infer their beliefs and desires.
But that kind of reasoning is computationally costly.
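Schematically, and in my own notation rather than the paper's, inverse planning computes a joint posterior over the agent's belief b and preference r from its action a and the environment e:

```latex
P(b, r \mid a, e) \;\propto\; P(a \mid b, r)\, P(b \mid e)\, P(r)
```

Marginalizing over all the beliefs the agent might hold is what makes the full computation expensive.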
🚨 New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!
Humans are capable of sophisticated theory of mind, but when do we use it?
We formalize & document a new cognitive shortcut: belief neglect, inferring others' preferences as if their beliefs are correct 🧵