A glimpse at what #NeuroAI brain models might enable: a topographic vision model predicts stimulation patterns that steer complex object recognition behavior in primates. This could be a key 'software' component for visual prosthetic hardware 🧠🤖🧪
08.10.2025 11:11 — 11 likes · 2 reposts · 1 reply · 0 quotes
🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior.
Paper: arxiv.org/abs/2510.03684
🧵
07.10.2025 15:21 — 13 likes · 8 reposts · 1 reply · 2 quotes
Just to support Sam's argument here: there is indeed a lot of evidence across several domains, such as vision and language, that ML models develop representations similar to the human brain's. There are of course many differences, but at a certain level of abstraction there is a surprising convergence.
05.10.2025 18:31 — 3 likes · 1 repost · 1 reply · 0 quotes
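This kind of representational convergence is commonly quantified with a similarity metric such as linear CKA, computed between model activations and brain responses to the same stimuli. A minimal sketch with synthetic data (the metric is standard; the matrices here are random and purely illustrative):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two response matrices
    (stimuli x features). 1.0 means identical up to rotation/scale."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
model_acts = rng.standard_normal((200, 64))     # e.g. model-layer activations
rotation, _ = np.linalg.qr(rng.standard_normal((64, 64)))
same_geometry = model_acts @ rotation           # same representation, rotated
unrelated = rng.standard_normal((200, 64))      # independent "responses"
```

`linear_cka(model_acts, same_geometry)` stays near 1.0 while `linear_cka(model_acts, unrelated)` drops toward chance, which is one concrete sense in which "similar on a certain level of abstraction" gets operationalized.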
Thank you!
02.10.2025 20:12 — 1 like · 0 reposts · 1 reply · 0 quotes
More precisely, we would categorize it as a brain-based disorder, but now I'm curious whether you would be on board with that?
02.10.2025 18:29 — 0 likes · 0 reposts · 1 reply · 0 quotes
You're right and I apologize for the imprecise phrasing. I wanted to connect with the usual "brain in health and disease" phrasing, for which we developed some first tools based on the learning disorder dyslexia. We are hopeful that these tools will be applicable to diseases of brain function
02.10.2025 14:26 — 0 likes · 0 reposts · 1 reply · 0 quotes
Very happy to be part of this project: Melika Honarmand has done a great job of using vision-language models to predict the behavior of people with dyslexia. A first step toward modeling various disease states using artificial neural networks.
02.10.2025 12:33 — 3 likes · 1 repost · 0 replies · 0 quotes
We're super excited about this approach: localizing model analogues of hypothesized neural causes in the brain and testing their downstream behavioral effects is applicable much more broadly in a variety of other contexts!
02.10.2025 12:10 — 1 like · 0 reposts · 0 replies · 0 quotes
Digging deeper into the ablated model, we found that its behavioral patterns mirror phonological deficits of dyslexic humans, without a significant deficit in orthographic processing. This connects to experimental work suggesting that phonological and orthographic deficits have distinct origins.
02.10.2025 12:10 — 0 likes · 0 reposts · 1 reply · 0 quotes
It turns out that the ablation of these units has a very specific effect: it reduces reading performance to dyslexia levels *but* keeps visual reasoning performance intact. This does not happen with random units, so localization is key.
02.10.2025 12:10 — 0 likes · 0 reposts · 1 reply · 0 quotes
We achieve this via the localization and subsequent ablation of units that are "visual-word-form selective", i.e., more active for the visual presentation of words than for other images. After ablating the units, we test the effect on behavior in benchmarks testing reading and other control tasks.
02.10.2025 12:10 — 0 likes · 0 reposts · 1 reply · 0 quotes
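A hedged sketch of this localize-then-ablate logic in PyTorch. The layer, unit count, selectivity score, and ablation fraction below are placeholders for illustration, not the paper's actual setup: score each unit's word selectivity, silence the top-scoring units with a forward hook, then re-run the behavioral benchmarks on the ablated model.

```python
import torch

def word_selectivity(acts_words, acts_other):
    """d'-style score per unit: positive = more active for word images."""
    mu_w, mu_o = acts_words.mean(0), acts_other.mean(0)
    pooled_sd = 0.5 * (acts_words.std(0) + acts_other.std(0)) + 1e-8
    return (mu_w - mu_o) / pooled_sd

def ablate_top_units(layer, selectivity, frac=0.02):
    """Zero out the most selective output units of `layer` via a forward hook.
    Returns the hook handle (call .remove() to undo) and the unit indices."""
    k = max(1, int(frac * selectivity.numel()))
    idx = torch.topk(selectivity, k).indices
    def hook(module, inputs, output):
        out = output.clone()
        out[..., idx] = 0.0          # silence the selected units
        return out
    return layer.register_forward_hook(hook), idx

# toy demonstration with synthetic "activations" for 512 units
torch.manual_seed(0)
acts_words = torch.randn(100, 512) + torch.linspace(0, 2, 512)  # later units word-biased
acts_other = torch.randn(100, 512)
sel = word_selectivity(acts_words, acts_other)

layer = torch.nn.Linear(512, 512)           # stand-in for a real model layer
handle, idx = ablate_top_units(layer, sel, frac=0.02)
ablated_out = layer(torch.randn(4, 512))    # selected units are now silenced
```

The random-unit control mentioned above corresponds to passing a shuffled copy of `sel` to `ablate_top_units` and checking that reading performance is then unaffected.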
I've been arguing that #NeuroAI should model the brain in health *and* in disease -- very excited to share a first step from Melika Honarmand: inducing dyslexia in vision-language models via targeted perturbations of visual-word-form units (analogous to the human VWFA) 🧠🤖🧪 arxiv.org/abs/2509.24597
02.10.2025 12:10 — 46 likes · 12 reposts · 1 reply · 3 quotes
Image description: Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.
👁️🧠 New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!
TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.
www.biorxiv.org/content/10.1...
27.09.2025 02:52 — 90 likes · 24 reposts · 2 replies · 5 quotes
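Of the three strategies in the figure, the gradient one is the easiest to sketch: given a differentiable forward model f(stim) → predicted neural response, descend on the mismatch with the target response. Everything below (the linear forward model, electrode count, amplitude bounds) is a stand-in for illustration, not the preprint's actual pipeline:

```python
import torch

def optimize_stimulation(forward_model, target, n_electrodes, steps=500, lr=0.1):
    """Find a stimulation pattern whose predicted response matches `target`."""
    stim = torch.zeros(n_electrodes, requires_grad=True)
    opt = torch.optim.Adam([stim], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((forward_model(stim) - target) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            stim.clamp_(0.0, 1.0)    # respect stimulator amplitude limits
    return stim.detach(), loss.item()

torch.manual_seed(0)
W = torch.randn(64, 16)                   # stand-in linear forward model
forward = lambda s: W @ s                 # 16 electrodes -> 64 recorded channels
target = forward(torch.rand(16))          # a reachable target response
stim, final_loss = optimize_stimulation(forward, target, n_electrodes=16)
```

The clamp after each step is a simple projected-gradient way to keep the solution within hardware limits; a real calibration pipeline would also fold in charge-density and safety constraints.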
EPFL, ETH Zurich & CSCS just released Apertus, Switzerland's first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it's built for transparency, responsibility & the public good.
Read more: actu.epfl.ch/news/apertus...
02.09.2025 11:48 — 53 likes · 26 reposts · 1 reply · 6 quotes
Action potential: 3 faculty opportunities to join EPFL neuroscience
1. Tenure Track Assistant Professor in Neuroscience go.epfl.ch/neurofaculty
2. Tenure Track Assistant Professor in Life Sciences Engineering, or
3. Associate Professor (tenured) in Life Sciences Engineering go.epfl.ch/LSEfaculty
18.08.2025 08:46 — 11 likes · 3 reposts · 1 reply · 0 quotes
Image description: Speakers and organizers of the GAC debate. Time and location of the GAC debate: 5 PM in Room C1.03.
Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks:
What benchmarks are useful for cognitive science?
2025.ccneuro.org/gac
13.08.2025 07:00 — 50 likes · 16 reposts · 1 reply · 1 quote
As part of #CCN2025 our satellite event on Monday will explore how we can model the brain as a physical system, from topography to biophysical detail -- and how such models can potentially lead to impactful applications neuroailab.github.io/modeling-the-physical-brain. Join us! 🧪🧠🤖
08.08.2025 19:21 — 12 likes · 1 repost · 0 replies · 0 quotes
This is all to say: I think it is very cool that the idea of "diverse representations driven by a unified objective" is coming to fruition, and I find the consistently high performance and alignment of powerful video models strong support for it.
01.08.2025 07:59 — 1 like · 0 reposts · 1 reply · 0 quotes
which enables a fine-grained mapping of cortical space with a new multi-task relevance analysis; the accurate (R~0.5) prediction of second-by-second human brain activity, which makes us more confident in the characterization of action understanding pathways; and a couple more
01.08.2025 07:59 — 1 like · 0 reposts · 1 reply · 0 quotes
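A minimal, hedged sketch of the kind of encoding analysis behind an R~0.5 claim: ridge-regress features onto per-voxel responses on a training set, then score Pearson R on held-out timepoints. Everything here is synthetic; in the actual study the regressors would be video-model features and the targets fMRI time courses.

```python
import numpy as np

def fit_ridge(X, Y, alpha=10.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def pearson_r(pred, actual):
    """Per-column (per-voxel) Pearson correlation."""
    p = pred - pred.mean(axis=0)
    a = actual - actual.mean(axis=0)
    denom = np.linalg.norm(p, axis=0) * np.linalg.norm(a, axis=0) + 1e-12
    return (p * a).sum(axis=0) / denom

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 500, 200, 50, 20
W_true = rng.standard_normal((n_feat, n_vox))
X_tr = rng.standard_normal((n_train, n_feat))
X_te = rng.standard_normal((n_test, n_feat))
Y_tr = X_tr @ W_true + 5.0 * rng.standard_normal((n_train, n_vox))  # noisy "voxels"
Y_te = X_te @ W_true + 5.0 * rng.standard_normal((n_test, n_vox))

W = fit_ridge(X_tr, Y_tr)
r = pearson_r(X_te @ W, Y_te)    # held-out prediction accuracy per voxel
```

Scoring on held-out data is what makes an R~0.5 figure meaningful: with enough regularized features, in-sample correlations can be made arbitrarily high.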
The mouse work is definitely relevant; will make sure to reference it (apologies for the oversight). I do think there are substantial novelties that have only been made possible by more recent powerful video models: the tight relation to behavior and a variety of tasks,
01.08.2025 07:59 — 1 like · 0 reposts · 1 reply · 0 quotes
I don't know what the policy is for parallel discussions on Bluesky and X, so I'll post twice for now
01.08.2025 07:59 — 1 like · 0 reposts · 1 reply · 0 quotes
It was a steep climb in the "early days" (~2012) up the ImageNet gradient towards better visual system models. That tapped out ~2015 after ResNet ...
But now w/ video models starting to perform, we can push forward again. Task-driven brain models ftw. amazing...
@mschrimpf.bsky.social
30.07.2025 15:03 — 6 likes · 2 reposts · 0 replies · 0 quotes
Great work by @davidtyt.bsky.social with @akgokce.bsky.social, Khaled Jedoui, @dyamins.bsky.social (and me).
Check out the full thread for more details bsky.app/profile/davi... and of course the paper biorxiv.org/content/10.1... #NeuroAI #Vision #Neuroscience #AI
30.07.2025 15:42 — 2 likes · 0 reposts · 0 replies · 0 quotes
Where models really shine is their ability to integrate disparate findings. Our findings not only recapitulate known brain structures, they also characterize action understanding pathways. The models help us make sense of hierarchy, behavioral relevance, and functional processing
30.07.2025 15:42 — 1 like · 0 reposts · 1 reply · 0 quotes
Brain-like computations support object and motion recognition that map onto classic visual ventral and dorsal streams. But looking deeper, we found a much more distributed computational landscape -- which may emerge from a single computational goal: modeling the visual world
30.07.2025 15:42 — 1 like · 0 reposts · 1 reply · 0 quotes
neuroscientist, psychiatrist, writer
optogenetics.org
karldeisseroth.org
https://www.amazon.com/Projections-Story-Emotions-Karl-Deisseroth/dp/1984853694
Assistant Professor in Neuroscience at the Donders Institute & Radboudumc.
Oscillations, language, the visual system, source reconstruction methods, and decoding. Open source enthusiast. https://britta-wstnr.github.io
Computational neuroscientist, NeuroAI lab @EPFL
Studying language in biological brains and artificial ones at the Kempner Institute at Harvard University.
www.tuckute.com
AI, Neuroscience and Music
EPFL Brain Mind Institute researchers develop & deploy technology to gain fundamental insight into brain & spinal cord systems, exploiting this knowledge for new therapies for brain disorders & towards novel intelligent machines https://go.epfl.ch/brain
PhD at EPFL 🧠💻
Ex @MetaAI, @SonyAI, @Microsoft
Egyptian 🇪🇬
The Algonauts Project, first launched in 2019, is on a mission to bring biological and machine intelligence researchers together on a common platform to exchange ideas and pioneer the intelligence frontier.
https://algonautsproject.com/
Strengthening Europe's Leadership in AI through Research Excellence | ellis.eu
AI + security | Stanford PhD in AI & Cambridge physics | techno-optimism + alignment + progress + growth | 🇺🇸🇨🇿
Cognitive neuroscientist studying visual and social perception. Asst Prof at JHU Cog Sci. She/her
As a hub for artificial intelligence, the EPFL AI Center leverages the extensive expertise of faculty and researchers across the Institution. It fosters a collaborative environment that nurtures multidisciplinary AI research, education, and innovation.
Helping machines make sense of the world. Asst Prof @icepfl.bsky.social; Before: @stanfordnlp.bsky.social @uwnlp.bsky.social AI2 #NLProc #AI
Website: https://atcbosselut.github.io/
Scientist, mentor, activist, explorer.
Biologist, McGill University
DeepMind Professor of AI @Oxford
Scientific Director @Aithyra
Chief Scientist @VantAI
ML Lead @ProjectCETI
geometric deep learning, graph neural networks, generative models, molecular design, proteins, bio AI