Linas Nasvytis

@linasnasvytis.bsky.social

PhD @Stanford studying cognitive science & AI. Prev: Pre-doc Fellow @Harvard, Econ & CS research with Paul Romer, Stats & ML @UniofOxford, Econ @Columbia

41 Followers  |  31 Following  |  10 Posts  |  Joined: 16.09.2025

Latest posts by linasnasvytis.bsky.social on Bluesky

Shoutout again to the amazing advisor team of
@gershbrain.bsky.social and @fierycushman.bsky.social!

Full paper: osf.io/preprints/ps...

17.09.2025 00:58 — 👍 4    🔁 0    💬 0    📌 0

This has implications for AI and cognitive modeling:

When designing systems to reason socially, we shouldn’t assume full inference is always used — or always needed.

Humans strike a balance between accuracy and efficiency.

17.09.2025 00:58 — 👍 1    🔁 0    💬 1    📌 0

We model this in a Bayesian framework, comparing 3 hypotheses:
1. Full ToM: preference + belief (inferred from environment) → action
2. Correspondence bias: preference → action
3. Belief neglect: preference + environment (ignoring beliefs) → action

People flexibly switch depending on context!

17.09.2025 00:58 — 👍 2    🔁 0    💬 1    📌 0
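The three observer models above can be sketched as a toy Bayesian comparison. This is a minimal illustration with an invented two-gem world, a hard-max planner, and a uniform prior over preferences; it is not the paper's actual model or code.

```python
# Toy two-gem world (illustrative assumptions, not the paper's task):
# gem "A" sits behind a door; gem "B" is in the open.

GEMS = ["A", "B"]

def act(pref, reachable):
    """Hard-max planner: go to the preferred gem if it is (believed)
    reachable, otherwise settle for the other gem."""
    other = "B" if pref == "A" else "A"
    return pref if reachable[pref] else other

def posterior(model, action, believed, actual):
    """Posterior over the agent's preferred gem after one observed action,
    with a uniform prior, under each of the three observer models."""
    scores = {}
    for pref in GEMS:
        if model == "full_tom":      # preference + belief -> action
            predicted = act(pref, believed)
        elif model == "corr_bias":   # preference -> action, world ignored
            predicted = pref
        else:                        # belief neglect: preference + true environment -> action
            predicted = act(pref, actual)
        scores[pref] = 1.0 if predicted == action else 1e-9  # soft zero
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

actual = {"A": True, "B": True}     # the door actually opens
believed = {"A": False, "B": True}  # the agent falsely thinks it is locked
obs = "B"                           # the agent walks to the open gem

for model in ("full_tom", "corr_bias", "neglect"):
    print(model, posterior(model, obs, believed, actual))
```

In this sketch, the full-ToM observer correctly stays uncertain (under the false belief, either preference predicts the same move), while the belief-neglect observer confidently, and here wrongly, concludes the agent prefers gem B.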

With minimal training, participants started engaging in full joint inference over beliefs and preferences.

But without that training, belief neglect was common.

This suggests people adaptively allocate cognitive effort depending on task structure.

17.09.2025 00:58 — 👍 1    🔁 0    💬 1    📌 0

Belief neglect is different from correspondence bias:

People DO account for environmental constraints (e.g., locked doors).

But they skip reasoning about what the agent believes about the environment.

It’s a mid-level shortcut.

17.09.2025 00:58 — 👍 2    🔁 0    💬 1    📌 0

We find that, by default, people often neglect the agent’s beliefs.

They infer preferences as if the agent’s beliefs were correct — even when they’re not.

This is what we call belief neglect.

17.09.2025 00:58 — 👍 2    🔁 0    💬 1    📌 0

In our task, participants watched agents navigate grid worlds to collect gems.

Sometimes, gems were hidden behind doors. Participants were told that some agents falsely believed they couldn't open these doors.

They then had to infer which gem the agents preferred.

17.09.2025 00:58 — 👍 2    🔁 0    💬 1    📌 0

The question we ask is: When do people actually engage in full ToM reasoning?

And when do they fall back on faster heuristics?

17.09.2025 00:58 — 👍 2    🔁 0    💬 1    📌 0

Theory of mind (ToM) — reasoning about others’ beliefs and desires — is central to human intelligence.

It's often framed as Bayesian inverse planning: we observe a person's action, then infer their beliefs and desires.

But that kind of reasoning is computationally costly.

17.09.2025 00:58 — 👍 1    🔁 0    💬 1    📌 0
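Bayesian inverse planning of this kind can be sketched as a joint posterior over every (belief, desire) pair, which is where the computational cost comes from. The gem names, the two-state door belief, and the softmax choice rule below are all illustrative assumptions, not taken from the paper.

```python
import math

# Toy inverse planning (illustrative assumptions, not the paper's model):
# an agent walks to one of two gems; a gem yields utility 1 if it is the
# desired one AND reachable under the agent's belief about the door.

GEMS = ["A", "B"]
BELIEFS = [
    {"A": True, "B": True},   # believes the door to gem A opens
    {"A": False, "B": True},  # falsely believes it is locked
]

def action_probs(desire, belief, beta=3.0):
    """Noisily rational (softmax) choice over which gem to walk to."""
    utils = {g: 1.0 if (g == desire and belief[g]) else 0.0 for g in GEMS}
    z = sum(math.exp(beta * u) for u in utils.values())
    return {g: math.exp(beta * utils[g]) / z for g in GEMS}

def joint_posterior(action):
    """P(belief, desire | action) with uniform priors. Full inverse
    planning enumerates every (belief, desire) pair -- the costly step."""
    post = {}
    for b, belief in enumerate(BELIEFS):
        for desire in GEMS:
            post[(b, desire)] = 0.25 * action_probs(desire, belief)[action]
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

print(joint_posterior("B"))
```

Even in this two-gem, two-belief world, the observer must score four hypotheses per action; the hypothesis space, and hence the cost, grows multiplicatively with richer beliefs and preferences.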

🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut: belief neglect — inferring others' preferences as if their beliefs are correct 🧵

17.09.2025 00:58 — 👍 49    🔁 16    💬 2    📌 1
