
Martin Schrimpf

@mschrimpf.bsky.social

NeuroAI Prof @EPFL 🇨🇭. ML + Neuro 🤖🧠. Brain-Score, CORnet, Vision, Language. Previously: PhD @MIT, ML @Salesforce, Neuro @HarvardMed, & co-founder @Integreat. go.epfl.ch/NeuroAI

2,683 Followers  |  62 Following  |  55 Posts  |  Joined: 16.12.2023

Latest posts by mschrimpf.bsky.social on Bluesky

A glimpse at what #NeuroAI brain models might enable: a topographic vision model predicts stimulation patterns that steer complex object recognition behavior in primates. This could be a key 'software' component for visual prosthetic hardware 🧠🤖🧪

08.10.2025 11:11 — 👍 11    🔁 2    💬 1    📌 0
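For intuition, here is a minimal sketch of what "model-guided" stimulation could mean computationally: a differentiable stand-in model maps an image plus a stimulation pattern to class logits, and gradient descent searches for the pattern that steers recognition toward a target category. The model, names, and sizes below are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class StandInModel(nn.Module):
    """Toy stand-in for a topographic vision model with a stimulation input."""
    def __init__(self, n_electrodes=96, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
        self.stim_in = nn.Linear(n_electrodes, 128)  # how stimulation perturbs features
        self.readout = nn.Linear(128, n_classes)

    def forward(self, image, stimulation):
        return self.readout(self.backbone(image) + self.stim_in(stimulation))

model = StandInModel()
image = torch.rand(1, 3, 32, 32)                # the stimulus being viewed
target = torch.tensor([3])                      # category to steer behavior toward
stim = torch.zeros(1, 96, requires_grad=True)   # per-electrode amplitudes

opt = torch.optim.Adam([stim], lr=0.05)
for _ in range(200):
    logits = model(image, stim.clamp(0, 1))     # keep amplitudes in a safe range
    loss = nn.functional.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the real setting the stand-in would be replaced by a model fitted to the animal's neural responses, and the optimized pattern would be delivered through the implant.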

🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior.

Paper: arxiv.org/abs/2510.03684

🧵

07.10.2025 15:21 — 👍 13    🔁 8    💬 1    📌 2

Just to support Sam's argument here: there is indeed a lot of evidence across several domains, such as vision and language, that ML models develop representations similar to the human brain's. There are of course many differences, but at a certain level of abstraction there is a surprising convergence.

05.10.2025 18:31 — 👍 3    🔁 1    💬 1    📌 0

Thank you!

02.10.2025 20:12 — 👍 1    🔁 0    💬 1    📌 0

More precisely, we would categorize it as a brain-based disorder, but now I'm curious whether you would be on board with that?

02.10.2025 18:29 — 👍 0    🔁 0    💬 1    📌 0

You're right, and I apologize for the imprecise phrasing. I wanted to connect with the usual "brain in health and disease" phrasing, for which we developed some first tools based on the learning disorder dyslexia. We are hopeful that these tools will be applicable to diseases of brain function.

02.10.2025 14:26 — 👍 0    🔁 0    💬 1    📌 0

Very happy to be part of this project: Melika Honarmand has done a great job of using vision-language models to predict the behavior of people with dyslexia. A first step toward modeling various disease states using artificial neural networks.

02.10.2025 12:33 — 👍 3    🔁 1    💬 0    📌 0

We're super excited about this approach: localizing model analogues of hypothesized neural causes in the brain and testing their downstream behavioral effects is applicable much more broadly in a variety of other contexts!

02.10.2025 12:10 — 👍 1    🔁 0    💬 0    📌 0

Digging deeper into the ablated model, we found that its behavioral patterns mirror phonological deficits of dyslexic humans, without a significant deficit in orthographic processing. This connects to experimental work suggesting that phonological and orthographic deficits have distinct origins.

02.10.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0

It turns out that the ablation of these units has a very specific effect: it reduces reading performance to dyslexia levels *but* keeps visual reasoning performance intact. This does not happen with random units, so localization is key.

02.10.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0

We achieve this via the localization and subsequent ablation of units that are "visual-word-form selective", i.e., more active for the visual presentation of words than for other images. After ablating these units, we test the effect on behavior in benchmarks of reading and other control tasks.

02.10.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0
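A minimal sketch of this localize-then-ablate logic, with illustrative names and random stand-in activations rather than the paper's code: rank units by a words-vs-other-images activation contrast, then silence the top fraction with a forward hook.

```python
import torch

def localize_word_selective_units(acts_words, acts_other, top_frac=0.01):
    """acts_*: (n_images, n_units) activations from one model layer.
    Returns indices of the units most selective for written words."""
    contrast = acts_words.mean(0) - acts_other.mean(0)
    k = max(1, int(top_frac * contrast.numel()))
    return contrast.topk(k).indices

# Random stand-ins for a layer's responses; the first 40 units are word-selective
acts_words = torch.randn(100, 4096) + 0.5 * (torch.arange(4096) < 40).float()
acts_other = torch.randn(100, 4096)
ablate_idx = localize_word_selective_units(acts_words, acts_other)

def ablation_hook(module, inputs, output):
    out = output.clone()
    out[..., ablate_idx] = 0.0  # silence the localized units
    return out

# layer.register_forward_hook(ablation_hook)  # attach to the localized layer,
# then re-run reading and control benchmarks to measure the behavioral deficit.
```

The random-unit control mentioned in the post corresponds to swapping ablate_idx for an equally sized random index set and checking that the reading deficit disappears.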

I've been arguing that #NeuroAI should model the brain in health *and* in disease -- very excited to share a first step from Melika Honarmand: inducing dyslexia in vision-language models via targeted perturbations of visual-word-form units (analogous to the human VWFA) 🧠🤖🧪 arxiv.org/abs/2509.24597

02.10.2025 12:10 — 👍 46    🔁 12    💬 1    📌 3

Come be our colleague at EPFL! Several open calls for positions πŸ§ͺπŸ§ πŸ€–

* Neuroscience www.epfl.ch/about/workin... (deadline Oct 1)

* Life Science Engineering www.epfl.ch/about/workin...

* CS general call www.epfl.ch/about/workin...

* Learning Sciences www.epfl.ch/about/workin...

29.09.2025 11:46 — 👍 31    🔁 10    💬 1    📌 0
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested both in computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.

👁️🧠 New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...

27.09.2025 02:52 — 👍 90    🔁 24    💬 2    📌 5
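As a rough illustration of the gradient-optimization method from the figure above: freeze a learned forward model (stimulation -> predicted neural response) and optimize the stimulation pattern so the prediction matches a target response. The forward model below is a random stand-in, not the model fitted to the participant's data.

```python
import torch
import torch.nn as nn

# Stand-in for a fitted forward model: 96 electrodes -> 200 recorded sites
forward_model = nn.Sequential(nn.Linear(96, 256), nn.ReLU(), nn.Linear(256, 200))
for p in forward_model.parameters():
    p.requires_grad_(False)  # the forward model stays fixed during synthesis

target_response = torch.randn(200)          # desired pattern of evoked activity
stim = torch.zeros(96, requires_grad=True)  # per-electrode stimulation amplitudes

opt = torch.optim.Adam([stim], lr=0.05)
for _ in range(500):
    pred = forward_model(stim.clamp(0, 1))  # keep amplitudes within safe limits
    loss = nn.functional.mse_loss(pred, target_response)
    opt.zero_grad()
    loss.backward()
    opt.step()
# stim now approximates the electrode pattern predicted to evoke the target activity
```

The one-to-one baseline in the figure would instead drive each electrode in proportion to its own target response, with no model inversion at all.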

EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...

02.09.2025 11:48 — 👍 53    🔁 26    💬 1    📌 6

Action potential 👉 3 faculty opportunities to join EPFL neuroscience: 1. Tenure Track Assistant Professor in Neuroscience go.epfl.ch/neurofaculty, 2. Tenure Track Assistant Professor in Life Sciences Engineering, or 3. Associate Professor (tenured) in Life Sciences Engineering go.epfl.ch/LSEfaculty

18.08.2025 08:46 — 👍 11    🔁 3    💬 1    📌 0
Speakers and organizers of the GAC debate. Time and location of the GAC debate: 5 PM in Room C1.03.

Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks:

📊 What benchmarks are useful for cognitive science? 💭
2025.ccneuro.org/gac

13.08.2025 07:00 — 👍 50    🔁 16    💬 1    📌 1

As part of #CCN2025, our satellite event on Monday will explore how we can model the brain as a physical system, from topography to biophysical detail -- and how such models can potentially lead to impactful applications: neuroailab.github.io/modeling-the-physical-brain. Join us! 🧪🧠🤖

08.08.2025 19:21 — 👍 12    🔁 1    💬 0    📌 0

this is all to say that I think it is very cool that the idea of "diverse representations driven by a unified objective" is coming to fruition, and I find the consistently high performance and alignment of powerful video models to be strong support for it.

01.08.2025 07:59 — 👍 1    🔁 0    💬 1    📌 0
Post image Post image

which enables a fine-grained mapping of cortical space with a new multi-task relevance analysis; the accurate (R~0.5) prediction of second-by-second human brain activity, which makes us more confident in the characterization of action understanding pathways; and a couple more.

01.08.2025 07:59 — 👍 1    🔁 0    💬 1    📌 0
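For readers less familiar with encoding models, here is a minimal sketch of the kind of analysis behind an R~0.5 number, using simulated data rather than the actual features and recordings: ridge-regress model features onto voxel time courses and score held-out Pearson correlation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 512))            # model features, one row per second
W = rng.standard_normal((512, 1000)) * 0.1     # hidden feature-to-voxel weights
Y = X @ W + rng.standard_normal((600, 1000))   # simulated noisy voxel responses

X_train, X_test = X[:500], X[500:]
Y_train, Y_test = Y[:500], Y[500:]
pred = Ridge(alpha=100.0).fit(X_train, Y_train).predict(X_test)

# Pearson r per voxel between predicted and held-out measured time courses
r = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"median r = {np.median(r):.2f}")
```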

The mouse work is definitely relevant; we will make sure to reference it (apologies for the oversight). I do think there are substantial novelties that have only been made possible with more recent powerful video models: the tight relation to behavior and a variety of tasks,

01.08.2025 07:59 — 👍 1    🔁 0    💬 1    📌 0

I don't know what the policy is for parallel discussions on Bluesky and X, so I'll post twice for now 😄

01.08.2025 07:59 — 👍 1    🔁 0    💬 1    📌 0

It was a steep climb in the "early days" (~2012) up the ImageNet gradient towards better visual system models. That tapped out ~2015 after ResNet ...

But now w/ video models starting to perform, we can push forward again. Task-driven brain models ftw. amazing...

@mschrimpf.bsky.social

30.07.2025 15:03 — 👍 6    🔁 2    💬 0    📌 0

Great work by @davidtyt.bsky.social with @akgokce.bsky.social, Khaled Jedoui, @dyamins.bsky.social (and me).

Check out the full thread for more details bsky.app/profile/davi... and of course the paper biorxiv.org/content/10.1... #NeuroAI #Vision #Neuroscience #AI

30.07.2025 15:42 — 👍 2    🔁 0    💬 0    📌 0

Where models really shine is in their ability to integrate disparate findings. Our findings not only recapitulate known brain structures, they also characterize action understanding pathways. The models help us make sense of hierarchy, behavioral relevance, and functional processing.

30.07.2025 15:42 — 👍 1    🔁 0    💬 1    📌 0

Brain-like computations support object and motion recognition that map onto the classic visual ventral and dorsal streams. But looking deeper, we found a much more distributed computational landscape -- one that may emerge from a single computational goal: modeling the visual world.

30.07.2025 15:42 — 👍 1    🔁 0    💬 1    📌 0
