Martin Schrimpf

@mschrimpf.bsky.social

NeuroAI Prof @EPFL 🇨🇭. ML + Neuro 🤖🧠. Brain-Score, CORnet, Vision, Language. Previously: PhD @MIT, ML @Salesforce, Neuro @HarvardMed, & co-founder @Integreat. go.epfl.ch/NeuroAI

2,769 Followers  |  63 Following  |  61 Posts  |  Joined: 16.12.2023

Latest posts by mschrimpf.bsky.social on Bluesky

Looking forward to presenting at the #AAAI #NeuroAI workshop, including 3 projects that were just accepted to ICLR! arxiv.org/abs/2509.24597, arxiv.org/abs/2510.03684, arxiv.org/abs/2506.13331 🧪🧠🤖

27.01.2026 06:24 — 👍 20    🔁 3    💬 0    📌 0

🎉 Re-Align is back for its 4th edition at ICLR 2026!

📣 We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields.

📝 Tracks: Short (≤5p), Long (≤10p), Challenge (blog)

⏰ Deadline: Feb 5, 2026 for papers

🔗 representational-alignment.github.io/2026/

07.01.2026 16:27 — 👍 13    🔁 8    💬 1    📌 3

One week left to apply to the EPFL computer science PhD program www.epfl.ch/education/ph.... It's an amazing environment to do impactful research 🧪 (with unparalleled compute)! My NeuroAI group is hiring 🧠🤖. Consider this review service by our fantastic PhD students: www.linkedin.com/posts/spnesh...

08.12.2025 14:20 — 👍 3    🔁 4    💬 0    📌 0
UCL NeuroAI Talk Series A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

For our next UCL #NeuroAI online seminar, we are happy to welcome Dr Martin Schrimpf @mschrimpf.bsky.social (EPFL)

πŸ—“οΈWed 19 Nov 2025
⏰2-3pm GMT

Neuro -> AI and Back Again: Integrative Models of the Human Brain in Health and Disease

ℹ️ Details / registration: www.eventbrite.co.uk/e/ucl-neuroa...

17.11.2025 16:26 — 👍 9    🔁 4    💬 0    📌 0

Thrilled to be among this fantastic cohort of AI2050 Fellows. This is a great recognition of the transformative potential of #NeuroAI and our lab's work in this space 🧪🧠🤖. Many thanks to @schmidtsciences.bsky.social for the support!

06.11.2025 11:16 — 👍 28    🔁 2    💬 4    📌 0
How neuroscientists are using AI Eight researchers explain how they are using large language models to analyze the literature, brainstorm hypotheses and interact with complex datasets.

Researchers are using LLMs to analyze the literature, brainstorm hypotheses, build models and interact with complex datasets. Hear from @mschrimpf.bsky.social, @neurokim.bsky.social, @jeremymagland.bsky.social, @profdata.bsky.social and others.

#neuroskyence

www.thetransmitter.org/machine-lear...

04.11.2025 16:07 — 👍 26    🔁 9    💬 0    📌 2
EPFL AI Center and Swiss AI Initiative Postdoctoral Fellowships The 2nd call is now open with a deadline for submissions of 3 November (17.00 CET)! Applications are encouraged from researchers at the postdoctoral level with a keen interest in collaborative, interdi...

Applications open for postdoc fellowships at the @epfl-ai-center.bsky.social, deadline Nov 3. www.epfl.ch/research/fun.... A great opportunity to work with our world-class faculty; the fellowship is set up for collaboration between multiple labs, including my #NeuroAI group 🤖🧠🧪

14.10.2025 07:26 — 👍 13    🔁 2    💬 0    📌 1

The new data by Fernandez, @mbeyeler.bsky.social, Liu et al will be great here to better map the neural effect of various stimulation patterns

09.10.2025 11:00 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

To expand on this: When we built our stimulation->neural predictor (www.biorxiv.org/content/10.1...), we didn't find much experimental data to constrain the model. The best we found was data from @markhisted.org and biophysical modeling by Kumaravelu et al.

09.10.2025 11:00 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

A glimpse at what #NeuroAI brain models might enable: a topographic vision model predicts stimulation patterns that steer complex object recognition behavior in primates. This could be a key 'software' component for visual prosthetic hardware 🧠🤖🧪

08.10.2025 11:11 — 👍 42    🔁 10    💬 1    📌 0

🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior.

Paper: arxiv.org/abs/2510.03684

🧡

07.10.2025 15:21 — 👍 35    🔁 16    💬 1    📌 2

Just to support Sam's argument here: there is indeed a lot of evidence across several domains, such as vision and language, that ML models develop representations similar to those in the human brain. There are of course many differences, but at a certain level of abstraction there is a surprising convergence.

05.10.2025 18:31 — 👍 4    🔁 1    💬 1    📌 1
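As an aside on how this kind of model-to-brain representational similarity is often quantified: representational similarity analysis (RSA) compares the pairwise dissimilarity structure of two systems. Below is a minimal numpy sketch with random stand-in data; the array shapes and the toy "model" (a linear transform of the toy "brain") are illustrative assumptions, not taken from any study mentioned in these posts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli = 30

# Stand-in data: rows are stimuli, columns are neurons / model features.
brain = rng.normal(size=(n_stimuli, 50))
model = brain @ rng.normal(size=(50, 64))  # toy model correlated with "brain"

def rdm(acts):
    """Representational dissimilarity matrix: 1 - pairwise row correlation."""
    return 1.0 - np.corrcoef(acts)

# Correlate the upper triangles of the two RDMs to get an alignment score.
iu = np.triu_indices(n_stimuli, k=1)
alignment = np.corrcoef(rdm(brain)[iu], rdm(model)[iu])[0, 1]
```

A high score here only reflects that the toy model is a linear transform of the toy brain data; real analyses use recorded neural responses and model activations to the same stimuli.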

Thank you!

02.10.2025 20:12 — 👍 1    🔁 0    💬 1    📌 0

More precisely, we would categorize it as a brain-based disorder, but now I'm curious whether you would be on board with that?

02.10.2025 18:29 — 👍 0    🔁 0    💬 1    📌 0

You're right, and I apologize for the imprecise phrasing. I wanted to connect with the usual "brain in health and disease" phrasing, for which we developed initial tools based on the learning disorder dyslexia. We are hopeful that these tools will be applicable to diseases of brain function.

02.10.2025 14:26 — 👍 0    🔁 0    💬 1    📌 0

Very happy to be part of this project: Melika Honarmand has done a great job of using vision-language models to predict the behavior of people with dyslexia. A first step toward modeling various disease states using artificial neural networks.

02.10.2025 12:33 — 👍 3    🔁 1    💬 0    📌 0

We're super excited about this approach: localizing model analogues of hypothesized neural causes in the brain and testing their downstream behavioral effects is applicable much more broadly in a variety of other contexts!

02.10.2025 12:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Digging deeper into the ablated model, we found that its behavioral patterns mirror phonological deficits of dyslexic humans, without a significant deficit in orthographic processing. This connects to experimental work suggesting that phonological and orthographic deficits have distinct origins.

02.10.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0

It turns out that the ablation of these units has a very specific effect: it reduces reading performance to dyslexia levels *but* keeps visual reasoning performance intact. This does not happen with random units, so localization is key.

02.10.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0

We achieve this via the localization and subsequent ablation of units that are "visual-word-form selective", i.e., more active for the visual presentation of words than for other images. After ablating these units, we test the effect on behavior in benchmarks of reading and other control tasks.

02.10.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0
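The localize-then-ablate recipe described in this thread can be sketched in a few lines. This is a toy stand-in that assumes nothing about the paper's actual model or code: the random activations, the 100-unit layer, and the selectivity threshold are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in activations of one layer: (n_images, n_units),
# for word images vs. non-word control images.
word_acts = rng.normal(size=(50, 100))
word_acts[:, :10] += 2.0  # pretend units 0-9 are word-selective
control_acts = rng.normal(size=(50, 100))

# Step 1 -- localize: units responding more to words than to other images.
selectivity = word_acts.mean(axis=0) - control_acts.mean(axis=0)
selected = np.where(selectivity > 1.0)[0]

# Step 2 -- ablate: zero the localized units (a targeted "lesion");
# one would then re-run the behavioral benchmarks on the lesioned model.
def ablate(acts, units):
    out = acts.copy()
    out[:, units] = 0.0
    return out

lesioned = ablate(word_acts, selected)
```

The random-units control mentioned in the thread would simply replace `selected` with a random index set of the same size before ablating.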

I've been arguing that #NeuroAI should model the brain in health *and* in disease -- very excited to share a first step from Melika Honarmand: inducing dyslexia in vision-language models via targeted perturbations of visual-word-form units (analogous to the human VWFA) 🧠🤖🧪 arxiv.org/abs/2509.24597

02.10.2025 12:10 — 👍 49    🔁 12    💬 1    📌 3

Faculty Position in Neuroscience The School of Life Sciences at EPFL invites applications for a Tenure Track Assistant Professor position in Neuroscience. At EPFL researchers develop and apply innovative technologies to understand br...

Come be our colleague at EPFL! Several open calls for positions 🧪🧠🤖

* Neuroscience www.epfl.ch/about/workin... (deadline Oct 1)

* Life Science Engineering www.epfl.ch/about/workin...

* CS general call www.epfl.ch/about/workin...

* Learning Sciences www.epfl.ch/about/workin...

29.09.2025 11:46 — 👍 31    🔁 10    💬 1    📌 0
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.

πŸ‘οΈπŸ§  New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...

27.09.2025 02:52 — 👍 93    🔁 25    💬 2    📌 6
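Of the three approaches in the figure above, the gradient-optimization idea is the easiest to sketch: find the stimulation pattern whose predicted response best matches a target, by descending through a differentiable forward model. The linear model, array shapes, and learning rate below are toy assumptions standing in for the learned stimulation->response predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_sites = 8, 20

# Toy differentiable forward model: stimulation -> predicted neural response.
W = rng.normal(size=(n_sites, n_electrodes))
target = rng.normal(size=n_sites)  # desired response pattern

# Gradient descent on 0.5 * ||W @ stim - target||^2 w.r.t. the stimulation.
stim = np.zeros(n_electrodes)
lr = 0.01
for _ in range(2000):
    residual = W @ stim - target
    stim -= lr * (W.T @ residual)

final_err = np.linalg.norm(W @ stim - target)
```

With a nonlinear learned predictor the loop is the same, only the gradient comes from autodiff; the inverse-network approach in the figure instead amortizes this optimization into a single forward pass.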

EPFL, ETH Zurich & CSCS just released Apertus, Switzerland's first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it's built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...

02.09.2025 11:48 — 👍 54    🔁 29    💬 1    📌 6

Action potential 👉 3 faculty opportunities to join EPFL neuroscience:
1. Tenure Track Assistant Professor in Neuroscience go.epfl.ch/neurofaculty
2. Tenure Track Assistant Professor in Life Sciences Engineering
3. Associate Professor (tenured) in Life Sciences Engineering go.epfl.ch/LSEfaculty

18.08.2025 08:46 — 👍 11    🔁 3    💬 1    📌 0
Speakers and organizers of the GAC debate. Time and location of the GAC debate: 5 PM in Room C1.03.

Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks:

📊 What benchmarks are useful for cognitive science? 💭
2025.ccneuro.org/gac

13.08.2025 07:00 — 👍 50    🔁 16    💬 1    📌 1
