On a Thursday your best bet is probably Silbergold or Pracht
29.09.2025 16:13 — 👍 1 🔁 0 💬 2 📌 0
@mariusschneider.bsky.social
Computational neuroscience Postdoc @bionicvisionlab.org @ucsantabarbara.bsky.social | former IMPRS PhD Student @Ernst Strüngmann Institute https://schneidermarius.github.io/
Congrats!! & welcome to one of the best cities ;)
23.09.2025 16:09 — 👍 3 🔁 0 💬 1 📌 0
Bet: this flavor of same-stimulus, same-task, compare-behavior, compare-physiology is the future of model testing and theory development in neuroscience.
22.09.2025 23:29 — 👍 16 🔁 4 💬 3 📌 1
Thanks to the whole team behind Mouse vs AI:
@jingpeng.bsky.social , Yuchen Hou, Joe Canzano, @spencerlaveresmith.bsky.social & @mbeyeler.bsky.social !
A special shout-out to Joe, who designed the Unity task, trained the mice, and recorded the neural data — making this benchmark possible!
Submit your model. Compete against mice.
Which model architectures solve the task, and which find brain-like solutions?
Let us uncover what it takes to build robust, biologically inspired agents!
Read the whitepaper: arxiv.org/abs/2509.14446
Explore the challenge: robustforaging.github.io
Participants submit models trained only on the robust foraging task. We then regress their model activations onto state-of-the-art mesoscale population recordings spanning all mouse visual areas at once. This lets us test whether brain-like codes emerge naturally from behavior-driven learning.
🧠 Track 2: Neural Alignment
Do task-trained agents develop brain-like internal representations?
We regress model activations onto 19k+ neurons across all mouse visual areas recorded during the same task.
This lets us test whether brain-like codes emerge naturally from behavior-driven learning.
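A minimal sketch of this kind of alignment analysis, using closed-form ridge regression on synthetic data. All shapes, the latent structure, and the regularization strength are assumptions for illustration; the actual challenge pipeline may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical shapes): 500 stimulus frames,
# 256 model units, 100 recorded neurons sharing a common latent code.
n_frames, n_units, n_neurons = 500, 256, 100
latent = rng.normal(size=(n_frames, 32))
X = latent @ rng.normal(size=(32, n_units))                       # model activations
Y = latent @ rng.normal(size=(32, n_neurons)) \
    + 0.1 * rng.normal(size=(n_frames, n_neurons))                # neural responses

# Train/test split, then closed-form ridge regression X -> Y.
X_tr, X_te, Y_tr, Y_te = X[:400], X[400:], Y[:400], Y[400:]
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_units), X_tr.T @ Y_tr)
Y_hat = X_te @ W

# Alignment score: mean Pearson r across held-out neurons.
r = [np.corrcoef(Y_hat[:, i], Y_te[:, i])[0, 1] for i in range(n_neurons)]
print(round(float(np.mean(r)), 3))
```

Because the synthetic neurons and model units share a latent code, the held-out correlation is high here; with real recordings the score would reflect how much of the neural variance the model's features actually capture.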
Track 1 tests generalization to unseen visual perturbations and ranks models by how well they perform under them.
🧭Track 1: Visual Robustness
Track 1 evaluates how well RL agents generalize to unseen visual perturbations, like different weather or lighting changes, while performing a foraging task.
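One way to turn this into a ranking metric is to normalize each agent's reward under unseen perturbations by its unperturbed performance. This is only a sketch; the condition names and episode rewards below are invented, and the competition's actual scoring may differ.

```python
import numpy as np

def robustness_score(rewards_by_condition, baseline="none"):
    """Mean episode reward under each unseen perturbation,
    normalized by reward in the unperturbed baseline condition."""
    base = np.mean(rewards_by_condition[baseline])
    return {cond: float(np.mean(r) / base)
            for cond, r in rewards_by_condition.items() if cond != baseline}

# Hypothetical per-episode rewards for one agent.
episodes = {
    "none": [10.0, 9.5, 10.5, 9.0],   # training-like conditions
    "fog":  [7.0, 6.5, 7.5, 6.0],     # unseen weather perturbation
    "rain": [8.0, 8.5, 7.5, 8.0],     # unseen lighting/weather change
}
print(robustness_score(episodes))
```

A score near 1.0 for a perturbation means the agent barely degrades under it; a brittle agent's scores collapse toward 0.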
Mice perform the same task in VR—enabling direct behavioral comparisons across species.
Train mouse and AI on a shared foraging task.
We built a naturalistic Unity world (not blocky or toy-like) and trained mice to perform a foraging task in it.
Now it’s your turn: train RL agents on the same task and submit your model!
🚨Our NeurIPS 2025 competition Mouse vs. AI is LIVE!
We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like.
Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge!
Whitepaper: arxiv.org/abs/2509.14446
An illustration of a mouse on a ball looking at a field of grass displayed on monitors.
Can your AI beat a mouse?
Mice still outperform our best computing machines in some ways. One way is robust visual processing.
10 years ago I decided to work on HARD behavior driven by COMPLEX visual processing. And that risk is paying off now. #neuroAI 1/n
Here is your last reminder that the application deadline for Imbizo.Africa is approaching fast: the 1st of July, in fact tomorrow. Still the place where diversity is at its best in the world! Tell everyone who needs to hear it. #africa #neuro
30.06.2025 19:53 — 👍 20 🔁 17 💬 0 📌 0
Now out in @natcomms.nature.com: Mice and monkeys spontaneously shift through comparable cognitive states - and it's written all over their faces! (1/7)
www.nature.com/articles/s41...
A Brief History of "Young People Today Don't Want to Work"
🧵
Highly recommended for all neuroscientists who want to learn more about computational neuroscience in a supportive community!
09.05.2025 19:30 — 👍 2 🔁 1 💬 0 📌 0
Re-posting is appreciated: We have a fully funded PhD position in the CMC lab @cmc-lab.bsky.social (at @tudresden_de). You can use forms.gle/qiAv5NZ871kv... to send your application and find more information. Deadline is April 30. Find out more about the CMC lab: cmclab.org and email me if you have questions.
20.02.2025 14:50 — 👍 77 🔁 89 💬 3 📌 8
Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically-inspired top-down feedback? 🧠📈
Our new paper explores this: elifesciences.org/reviewed-pre...
Firing rates in visual cortex show representational drift, while temporal spike sequences remain stable
www.sciencedirect.com/science/arti...
Great work by Boris Sotomayor together with @battaglialab.bsky.social
Excited to share our new pre-print on bioRxiv, in which we reveal that feedback-driven motor corrections are encoded in small, previously missed neural signals.
07.04.2025 14:54 — 👍 25 🔁 16 💬 1 📌 1
Excited for the poster session at #Cosyne tonight!
Come check out the two posters I am involved in—one on neural representations and generalization in mice & RL agents trained on the same foraging task, and another on selective attention in spiking networks.
Party poster for dance party on final night of Cosyne 2025 workshops. It will take place April 1st, 2025, 10PM to 3AM at Le P'tit Caribou.
Coming to the #Cosyne2025 workshops? Wanna dance on the final night? We got you covered.
@glajoie.bsky.social and I have organized a party in Tremblant. Come and get on the dance floor y'all. 🕺
April 1st
10PM-3AM
Location: Le P'tit Caribou
DJs Mat Moebius, Xanarelle, and Prosocial
Please share!
🧵 time!
1/15
Why are CNNs so good at predicting neural responses in the primate visual system? Is it their design (architecture) or learning (training)? And does this change along the visual hierarchy?
🧠🤖
🧠📈
Why academia is sleepwalking into self-destruction. My editorial @brain1878.bsky.social If you agree with the sentiments please repost. It's important for all our sakes to stop the madness
academic.oup.com/brain/articl...
Happy to see this study led by Irene Onorato finally out - we show distinct phase locking and spike timing of optotagged PV cells and Sst interneuron subtypes during gamma oscillations in mouse visual cortex, suggesting an update to the classic PING model www.sciencedirect.com/science/arti...
06.03.2025 22:22 — 👍 35 🔁 13 💬 1 📌 0
🚨Preprint Alert
New work with @martinavinck.bsky.social
We elucidate the architectural bias that enables CNNs to predict early visual cortex responses in macaques and humans even without optimization of convolutional kernels.
🧠🤖
🧠📈
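The idea that the convolutional architecture alone carries predictive power can be illustrated with a toy sketch: extract features from a single conv layer with random, untrained kernels, then fit only a linear readout to a synthetic response. Everything below (image statistics, the contrast-driven "response", filter count) is an invented stand-in, not the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_conv_features(images, n_filters=16, k=5):
    """One conv layer with random, UNTRAINED kernels + ReLU + global
    average pooling: the architectural prior alone, no learned weights."""
    filters = rng.normal(size=(n_filters, k, k))
    # All k-by-k patches of every image: (n, H-k+1, W-k+1, k, k)
    patches = np.lib.stride_tricks.sliding_window_view(images, (k, k), axis=(1, 2))
    feats = np.einsum("nhwij,fij->nhwf", patches, filters)
    return np.maximum(feats, 0).mean(axis=(1, 2))   # (n, n_filters)

# Synthetic stimuli whose overall contrast varies across images.
contrast = rng.uniform(0.5, 2.0, size=(200, 1, 1))
images = contrast * rng.normal(size=(200, 32, 32))

X = random_conv_features(images)
y = (images ** 2).mean(axis=(1, 2))   # hypothetical energy-driven response

# Linear readout on centered data; only this readout is fit.
w, *_ = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)
pred = (X - X.mean(0)) @ w
print(round(float(np.corrcoef(pred, y - y.mean())[0, 1]), 2))
```

Even with no kernel optimization, the pooled random-filter features track stimulus energy well, which is the flavor of architectural bias the post describes.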
Spiros Chavlis and I are very excited to share our latest #dendritic ANN research published in Nature Comms. @natureportfolio.bsky.social. We show that adopting the structure and sparse sampling features of biological #dendrites makes ANNs accurate, highly efficient and robust to overfitting.
25.01.2025 14:04 — 👍 42 🔁 11 💬 0 📌 2
It’s shocking how much they’ve all decided to bow down and kiss the ring. Certainly, it has made it clear that if things get really bad, they will do nothing other than protect their own financial interests.
24.01.2025 14:33 — 👍 13 🔁 1 💬 3 📌 0
What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @solarpunkgabs.bsky.social, finally published after the first preprint in 2021! 🤖🧠🧪
www.nature.com/articles/s41...
Serene, empty lecture hall with assigned seats (dark table-clothed desks with named mugs for each participant; white plastic chairs) that looks out onto a landscape of dunes and summery beaches, waiting to be filled with the life of the next class of Imbizo.
... and so it begins again.🧠🌴🌊🌍 Imbizo.africa is on its way, ... without me 😥, but look! at the views through those windows.
I wanna go and learn about all things neuro at one of the most beautiful spots in the world. Don't you, too?