
RT Pramod

@rtpramod.bsky.social

Postdoc at MIT. Cognitive Neuroscience.

142 Followers  |  248 Following  |  14 Posts  |  Joined: 17.11.2024

Latest posts by rtpramod.bsky.social on Bluesky


𝗔 π—‘π—˜π—¨π—₯π—’π—˜π—–π—’π—Ÿπ—’π—šπ—œπ—–π—”π—Ÿ π—£π—˜π—₯π—¦π—£π—˜π—–π—§π—œπ—©π—˜ 𝗒𝗑 π—§π—›π—˜ 𝗣π—₯π—˜π—™π—₯π—’π—‘π—§π—”π—Ÿ 𝗖𝗒π—₯π—§π—˜π—«
By Mars and Passingham
"Understanding anthropoid foraging challenges may thus contribute to our understanding of human cognition"
Going to the top of the reading list!
doi.org/10.1016/j.ne...
#neuroskyence

11.10.2025 16:31 β€” πŸ‘ 60    πŸ” 16    πŸ’¬ 4    πŸ“Œ 2

🚨Out in PNAS🚨
with @joshtenenbaum.bsky.social & @rebeccasaxe.bsky.social

Punishment, even when intended to teach norms and change minds for the good, may backfire.

Our computational cognitive model explains why!

Paper: tinyurl.com/yc7fs4x7
News: tinyurl.com/3h3446wu

🧡

08.08.2025 14:04 β€” πŸ‘ 66    πŸ” 28    πŸ’¬ 3    πŸ“Œ 1
Things and Stuff: How the brain distinguishes oozing fluids from solid objects (YouTube video by the McGovern Institute)

Super excited to share our new article: β€œDissociable cortical regions represent things and stuff in the human brain” with @nancykanwisher.bsky.social, @rtpramod.bsky.social and @joshtenenbaum.bsky.social

Video abstract: www.youtube.com/watch?v=B0XR...

Paper: authors.elsevier.com/a/1lWxv3QW8S...

01.08.2025 13:50 β€” πŸ‘ 33    πŸ” 12    πŸ’¬ 1    πŸ“Œ 2
Evidence from Formal Logical Reasoning Reveals that the Language of Thought is not Natural Language Humans are endowed with a powerful capacity for both inductive and deductive logical thought: we easily form generalizations based on a few examples and draw conclusions from known premises. Humans al...

Is the Language of Thought == Language? A Thread 🧡
New Preprint (link: tinyurl.com/LangLOT) with @alexanderfung.bsky.social, Paris Jaggers, Jason Chen, Josh Rule, Yael Benn, @joshtenenbaum.bsky.social, @spiantado.bsky.social, Rosemary Varley, @evfedorenko.bsky.social
1/8

03.08.2025 20:18 β€” πŸ‘ 70    πŸ” 29    πŸ’¬ 5    πŸ“Œ 4

Can you tell if a tower will fall or if two objects will collide β€” just by looking? πŸ§ πŸ‘€ Come check out my #CogSci2025 poster (P1-W-207) on July 31, 13:00–14:15 PT to learn how people do general-purpose physical reasoning from visual input!

29.07.2025 23:15 β€” πŸ‘ 13    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

Good question! We haven't tested the cases you've mentioned, but Jason Fischer's 2016 paper found that the PN doesn't respond strongly to social prediction (in Heider-and-Simmel-like displays).

19.06.2025 13:58 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We have started to look in the cerebellum. It is still early days so keep an eye out for updates in the future!

19.06.2025 13:54 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Decoding predicted future states from the brain’s β€œphysics engine” Using fMRI in humans, this study provides evidence for future state prediction in brain regions involved in physical reasoning.

Thrilled to announce our new publication titled 'Decoding predicted future states from the brain's physics engine' with @emiecz.bsky.social, Cyn X. Fang, @nancykanwisher.bsky.social, @joshtenenbaum.bsky.social

www.science.org/doi/full/10....

(1/n)

17.06.2025 18:23 β€” πŸ‘ 48    πŸ” 19    πŸ’¬ 1    πŸ“Œ 2

Thanks to my co-authors and all the people who gave constructive feedback over the course of this project! Special shout out to Kris Brewer for shooting the videos used in Experiment 1 and @georginawooxy.bsky.social for her deep neural network expertise.

(12/12)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Our findings show that the PN carries abstract object-contact information and provide the strongest evidence yet that the PN is engaged in predicting what will happen next. These results open many new avenues of investigation into how we understand, predict, and plan in the physical world.

(11/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our main results are i) not present in the ventral temporal cortex, ii) not present in the primary visual cortex (i.e., our stimuli were unlikely to have low-level visual confounds), and iii) replicable with different analysis criteria & methods. See paper for details.

(10/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Short answer: Yes! Using MVPA, we found that the PN has information about predicted contact events (i.e., collisions). This was true not only within a scenario (the β€˜roll’ scene above) but also generalized across scenarios, indicating the abstractness of the representation.

(9/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

That is, when we see this: does the PN predict this?

(8/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In our second pre-registered fMRI experiment, we tested the central tenet of the β€˜physics engine’ hypothesis – that the PN runs forward simulations to predict what will happen next. If true, PN should contain information about predicted future states before they occur.

(7/n)

17.06.2025 18:23 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
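The forward-simulation idea in this post is easy to sketch outside the paper: a generative model steps the world state forward in time and reads off a predicted contact event before it actually happens. Here is a toy 1-D illustration (all numbers and the friction model are made up for the sketch; this is not the model from the paper):

```python
# Toy "physics engine": roll a ball toward a stationary block and
# predict whether and when they will make contact.
def simulate(ball_x, ball_v, block_x, friction=0.98, dt=0.1, steps=200):
    """Step the world forward; return the predicted time of contact, or None."""
    t = 0.0
    for _ in range(steps):
        ball_x += ball_v * dt
        ball_v *= friction          # ball slows down over time
        t += dt
        if ball_x >= block_x:       # a contact event is predicted
            return t
        if abs(ball_v) < 1e-3:      # ball effectively stops short of the block
            return None
    return None

print(simulate(ball_x=0.0, ball_v=1.0, block_x=3.0))   # fast enough: contact predicted
print(simulate(ball_x=0.0, ball_v=1.0, block_x=60.0))  # too far: no contact predicted
```

The point of the sketch is only the logic: the predicted contact exists in the model's rollout before it would occur in the world, which is what the fMRI experiment looks for in the PN.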

Given their importance for prediction, we hypothesized that the PN would encode object contact. In our first pre-registered fMRI experiment, we used multi-voxel pattern analysis (MVPA) and found that only PN carried scenario-invariant information about object contact.

(6/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
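The cross-scenario decoding logic behind this kind of MVPA result can be illustrated with a toy sketch: train a classifier on patterns from one scenario, test it on patterns from another, and see whether the contact/no-contact distinction transfers. Everything below is synthetic (fake "voxel" data, a simple correlation-based nearest-centroid classifier), not the authors' actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50

# Synthetic voxel patterns: a shared "contact" signal plus trial-specific noise.
contact_signal = rng.normal(size=n_voxels)

def pattern(contact, noise_scale=1.0):
    base = contact_signal if contact else -contact_signal
    return base + rng.normal(scale=noise_scale, size=n_voxels)

# "Scenario A": average training patterns per condition (contact vs. no contact).
train_contact = np.mean([pattern(True) for _ in range(20)], axis=0)
train_nocontact = np.mean([pattern(False) for _ in range(20)], axis=0)

def classify(x):
    """Nearest-centroid by correlation, a common choice in MVPA analyses."""
    r_c = np.corrcoef(x, train_contact)[0, 1]
    r_n = np.corrcoef(x, train_nocontact)[0, 1]
    return "contact" if r_c > r_n else "no contact"

# "Scenario B": same underlying contact signal, fresh noise. If decoding
# transfers, the representation is abstract rather than scenario-specific.
test_trials = [pattern(True) for _ in range(10)] + [pattern(False) for _ in range(10)]
labels = ["contact"] * 10 + ["no contact"] * 10
accuracy = np.mean([classify(x) == y for x, y in zip(test_trials, labels)])
print(f"cross-scenario decoding accuracy: {accuracy:.2f}")
```

In the toy setup the "contact" signal is shared across scenarios by construction, so decoding transfers; in the actual experiment, that transfer is the empirical finding being tested.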

If a container moves, then so does its containee, but the same is not true of an object that is merely occluded by the container without contacting it!

(5/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

However, there was no evidence for such predicted future state information in the PN. We realized that object-object contact is an excellent way to test the Physics Engine hypothesis. When two objects are in contact, their fate is intertwined:

(4/n)

17.06.2025 18:23 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

These results have led to the hypothesis that the Physics Network (PN) is our brain’s β€˜Physics Engine’ – a generative model of the physical world (like those used in video games) capable of running simulations to predict what will happen next.

(3/n)

17.06.2025 18:23 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

How do we understand, plan and predict in the physical world? Prior research has implicated fronto-parietal regions of the human brain (the β€˜Physics Network’, PN) in physical judgement tasks, including in carrying representations of object mass & physical stability.

(2/n)

17.06.2025 18:23 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Shown is an example image that participants viewed in EEG, fMRI, or a behavioral annotation task, alongside a schematic of a regression procedure for jointly predicting fMRI responses from stimulus features and EEG activity.

I am excited to share our recent preprint and the last paper of my PhD! Here, @imelizabeth.bsky.social, @lisik.bsky.social, Mick Bonner, and I investigate the spatiotemporal hierarchy of social interactions in the lateral visual stream using EEG-fMRI.

osf.io/preprints/ps...

#CogSci #EEG

23.04.2025 15:34 β€” πŸ‘ 27    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0
Video of a baby on its parent's chest looking at the parent's face and smiling.

When you see this image, does it make you wonder what that baby is thinking? Do you think the baby is merely perceiving a set of shapes, or do you think that the baby is also inferring meaning from the face they are looking at? (1/5)

22.04.2025 13:54 β€” πŸ‘ 30    πŸ” 7    πŸ’¬ 1    πŸ“Œ 1
Sparse components distinguish visual pathways & their alignment to... The ventral, dorsal, and lateral streams in high-level human visual cortex are implicated in distinct functional processes. Yet, deep neural networks (DNNs) trained on a single task model the...

**ecstatic** to share our @iclr-conf.bsky.social paper: sparse components distinguish visual pathways & their alignment to neural networks, with @nancykanwisher.bsky.social and Meenakshi Khosla (openreview.net/forum?id=IqH...)

1/n

22.04.2025 20:35 β€” πŸ‘ 18    πŸ” 6    πŸ’¬ 1    πŸ“Œ 2
Visual homogeneity computations in the brain enable solving property-based visual tasks Seemingly disparate property-based tasks (oddball search, same-different and symmetry) are solved by computing a novel image property, visual homogeneity, which is localized to the object selective co...

In a study now out in @eLife, @GeorginJacob @PramodRT9 and I have some exciting results: a novel computation that helps the brain solve disparate visual tasks, a novel brain region that performs this computation....what's not to like?! Read on.... 1/n
elifesciences.org/articles/93033

18.04.2025 19:30 β€” πŸ‘ 10    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Academics - where are academic jobs posted for non-UK non-North American countries? If you were looking for jobs in, say, the Nordic countries, or Australia, where do you look? Asking for all the PhDs who are on the market this year. (Pls no April fools jokes, their nerves are frayed as it is)

01.04.2025 21:15 β€” πŸ‘ 90    πŸ” 22    πŸ’¬ 17    πŸ“Œ 2
Technical Associate I, Kanwisher Lab - MIT, Cambridge MA 02139

I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...

26.03.2025 15:09 β€” πŸ‘ 64    πŸ” 48    πŸ’¬ 5    πŸ“Œ 3
(same content as the table in the paper)

My commentary on the do's and don'ts of cognitive evaluations in LLMs is now out in Nature Human Behaviour:

doi.org/10.1038/s415...

posting here with a figure that didn't make it into the final draft and is now instead a boring table :P

#CogSci #LLMs #AI

16.01.2025 22:26 β€” πŸ‘ 64    πŸ” 14    πŸ’¬ 0    πŸ“Œ 0

Come and work with us and do a PhD on a very exciting project! #neurojobs

15.01.2025 09:34 β€” πŸ‘ 3    πŸ” 5    πŸ’¬ 0    πŸ“Œ 0
a man in a sweater is dancing in front of a door with a christmas wreath.

My email is now closed until 2025. It’s been a tremendous year and I really could not wish for more. I got more this year than I ever dreamed of. Great collaborations (Sight team & @rtpramod.bsky.social & @nancykanwisher.bsky.social), students and researchers, cool papers, new data & funding.

20.12.2024 20:27 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
