@initself.bsky.social
neuromantic - ML and cognitive computational neuroscience - PhD student at Kietzmann Lab, Osnabrück University. ⛓️ https://init-self.com
Interested in the latest advances in neuroscience (neural dynamics and internal models) and how they can be leveraged to build smarter, adaptive AI?
➡️ My first real solo piece 🖤🫶 @natneuro.nature.com
rdcu.be/eWVmA
When and why do modular representations emerge in neural networks?
@stefanofusi.bsky.social and I posted a preprint answering this question last year, and now it has been extensively revised, refocused, and generalized. Read more here: doi.org/10.1101/2024... (1/7)
🚨new work with the dream team @danakarca.bsky.social @loopyluppi.bsky.social @fatemehhadaeghi.bsky.social @stuartoldham.bsky.social @duncanastle.bsky.social
We use game theory to show that the brain is not optimally wired for communication, and that there’s more to its story:
www.biorxiv.org/content/10.6...
Brains have many pathways/subnetworks, but which principles underlie their formation?
In our #NeurIPS paper led by Jack Cook we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models🧵
#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813
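For readers who don't live in the MoE literature: below is a minimal, generic sketch of a top-k Mixture-of-Experts layer in PyTorch. It illustrates the model family only; it is not the brain-like MoE from the paper, and the specific inductive biases studied there are not implemented here.

```python
# Generic top-k Mixture-of-Experts layer (illustration only, not the paper's model):
# a router picks a few expert MLPs per input, creating separable "pathways".
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)   # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                          # x: (batch, dim)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # mixing weights over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e           # inputs routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(16, 64)
print(MoELayer()(x).shape)   # torch.Size([16, 64])
```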
In this piece for @thetransmitter.bsky.social, I argue that ecological neuroscience should leverage generative video and interactive models to simulate the world from animals' perspectives.
The technological building blocks are almost here - we just need to align them for this application.
🧠🤖
Looking forward!
03.12.2025 13:23 — 👍 6 🔁 0 💬 0 📌 0
Congrats! ✨
25.11.2025 22:05 — 👍 2 🔁 0 💬 1 📌 0
🚨 Out in Patterns!
We asked ourselves whether complex neural dynamics like predictive remapping and allocentric coding can emerge from simple physical principles, in this case energy efficiency. It turns out they can!
More information in the 🧵 below.
I am super excited to see this one out in the wild.
Y’all are reading this paper in the wrong way.
We love to trash dominant hypotheses, but we need to look for evidence against the manifold hypothesis elsewhere:
This elegant work doesn’t show that neural dynamics are high-dimensional, nor that we should stop using PCA
It’s quite the opposite!
(thread)
Congrats Thomas! Great to see this out :)
21.11.2025 18:25 — 👍 2 🔁 0 💬 1 📌 0
What happens if you hook up an energy-efficiency optimising RNN to active vision input?
It learns predictive remapping and path integration into allocentric scene coordinates.
Now out in patterns: www.cell.com/patterns/ful...
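A minimal sketch of the general recipe, with my own assumptions rather than the published model: a vanilla RNN over glimpse sequences with an energy-efficiency penalty, i.e. metabolic costs on unit activity and connection weights added to a placeholder task loss. The input sizes, the reconstruction task, and the penalty strengths (lambda_act, lambda_w) are illustrative only.

```python
# Sketch only: RNN over "glimpse" sequences with an energy-efficiency penalty
# (L2 costs on activity and weights) added to a placeholder task loss.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=32, hidden_size=128, batch_first=True)
readout = nn.Linear(128, 32)
params = list(rnn.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
lambda_act, lambda_w = 1e-3, 1e-4              # penalty strengths (assumed values)

for step in range(100):
    glimpses = torch.randn(8, 20, 32)          # placeholder for active-vision input
    hidden, _ = rnn(glimpses)                  # (batch, time, hidden)
    recon = readout(hidden)
    task_loss = ((recon - glimpses) ** 2).mean()   # placeholder task: reconstruct input
    energy = lambda_act * hidden.pow(2).mean() \
           + lambda_w * sum(p.pow(2).sum() for p in params)
    (task_loss + energy).backward()
    opt.step(); opt.zero_grad()
```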
🚨New Preprint!
How can we model natural scene representations in visual cortex? One solution lies in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715
+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14
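To make the objective concrete, here is a hedged sketch (not the paper's implementation) of next-glimpse feature prediction: given the features of the current glimpse plus the upcoming saccade vector, predict the feature vector of the next glimpse. The predictor architecture, dimensions, and variable names are assumptions for illustration.

```python
# Sketch of a next-glimpse prediction objective (assumed details, not the paper's code):
# predict the features of glimpse t+1 from the features of glimpse t and the saccade.
import torch
import torch.nn as nn

feat_dim, sacc_dim = 512, 2
predictor = nn.Sequential(
    nn.Linear(feat_dim + sacc_dim, 1024), nn.ReLU(),
    nn.Linear(1024, feat_dim),
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

# Placeholder batch: features of glimpse t, saccade t -> t+1, features of glimpse t+1.
feat_t = torch.randn(32, feat_dim)
saccade = torch.randn(32, sacc_dim)
feat_next = torch.randn(32, feat_dim)

pred = predictor(torch.cat([feat_t, saccade], dim=-1))
loss = ((pred - feat_next) ** 2).mean()        # next-glimpse feature prediction loss
loss.backward(); opt.step()
```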
6. The AI scientist took 45 minutes and $8.25 in LLM tokens to find a new tuning equation that fits the data better and predicts the population code’s high-dimensional structure – even though we had only tasked it with modeling single-cell tuning.
14.11.2025 18:07 — 👍 12 🔁 3 💬 1 📌 0
New preprint led by @pablooyarzo.bsky.social together with @kohitij.bsky.social, Diego Vidaurre & Radek Cichy.
Using EEG + fMRI, we show that when humans recognize images that feedforward CNNs fail on, the brain recruits cortex-wide recurrent resources.
www.biorxiv.org/content/10.1... (1/n)
Thanks! We’ll put the code and chat interface out soon :)
04.11.2025 16:27 — 👍 1 🔁 0 💬 0 📌 0
Congratulations!!
04.11.2025 13:52 — 👍 1 🔁 0 💬 0 📌 0
Thanks! 🧠✨
03.11.2025 17:23 — 👍 2 🔁 0 💬 1 📌 0
Thanks to all coauthors! 🦾
@anthesdaniel.bsky.social
@adriendoerig.bsky.social
@sushrutthorat.bsky.social
Peter König
@timkietzmann.bsky.social
/fin
We are convinced that these results mark a shift from static neural decoding toward interactive, generative brain-language interfaces.
Preprint: www.arxiv.org/abs/2509.23941
CorText also responds to in-silico microstimulations in line with experimental predictions: for example, when amplifying face-selective voxels for trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition we can “remove people”. 7/n
03.11.2025 15:17 — 👍 5 🔁 0 💬 1 📌 1
Following Shirakawa et al. (2025), we test zero-shot neural decoding: When entire semantic categories (e.g., zebras, surfers, airplanes) are withheld during training, the model can still give meaningful descriptions of the visual content. 6/n
03.11.2025 15:17 — 👍 2 🔁 0 💬 1 📌 0
What can we do with it? For example, we can have CorText answer questions about a visual scene (“What’s in this image?” “How many people are there?”) that a person saw while in an fMRI scanner. CorText never sees the actual image, only the brain scan. 5/n
03.11.2025 15:17 — 👍 2 🔁 1 💬 1 📌 0
By moving neural data into LLM token space, we gain open-ended, linguistic access to brain scans as experimental probes. At the same time, this has the potential to unlock many additional downstream capabilities (think reasoning, in-context learning, web-search, etc). 4/n
03.11.2025 15:17 — 👍 1 🔁 0 💬 1 📌 0
To accomplish this, CorText fuses fMRI data into the latent space of an LLM, turning neural signals into tokens that the model can reason about in response to questions. This sets it apart from existing decoding techniques, which map brain data onto static embeddings/outputs (a rough sketch of the fusion idea follows after this thread). 3/n
03.11.2025 15:17 — 👍 2 🔁 0 💬 1 📌 0
Generative language models are revolutionizing human-machine interaction. Importantly, such systems can now reason cross-modally (e.g. vision-language models). Can we do the same with neural data - i.e., can we build brain-language models with comparable flexibility? 2/n
03.11.2025 15:17 — 👍 2 🔁 0 💬 1 📌 0
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
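As flagged in post 3/n above, here is a minimal sketch of the general fusion idea, with assumptions throughout (this is not the CorText codebase): project an fMRI voxel pattern into the LLM's token-embedding space and prepend it as soft tokens to a question, so the LLM answers conditioned on brain data. The stand-in LLM ("gpt2"), voxel count, and number of brain tokens are arbitrary; in practice the projection would be trained on paired brain/text data while the LLM can stay frozen.

```python
# Sketch of brain-to-LLM fusion (assumed details, not the CorText implementation):
# map an fMRI voxel pattern to "soft tokens" in the LLM's embedding space, prepend
# them to a question, and let the LLM generate an answer.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

llm_name = "gpt2"                                   # stand-in LLM for illustration
tok = AutoTokenizer.from_pretrained(llm_name)
llm = AutoModelForCausalLM.from_pretrained(llm_name)
d_model = llm.config.hidden_size

n_voxels, n_brain_tokens = 20000, 16                # arbitrary example sizes
brain_proj = nn.Linear(n_voxels, n_brain_tokens * d_model)   # fusion module
# (untrained here; in practice it would be fit on paired brain/text data,
#  while the LLM itself can remain frozen)

fmri = torch.randn(1, n_voxels)                     # placeholder voxel pattern
brain_tokens = brain_proj(fmri).view(1, n_brain_tokens, d_model)

question = tok("What is in this image?", return_tensors="pt")
text_embeds = llm.get_input_embeddings()(question.input_ids)
inputs_embeds = torch.cat([brain_tokens, text_embeds], dim=1)   # soft tokens + text

out = llm.generate(inputs_embeds=inputs_embeds, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))  # nonsense until brain_proj is trained
```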
In his article “Mysterium Iniquitatis of Sinful Man Aspiring into the Place of God”, which is a very sane title (contra its contents ofc)
20.10.2025 20:40 — 👍 1 🔁 0 💬 0 📌 0
“To the theoretical question, "Can you design a machine to do whatever a brain can do?" the answer is this: "If you will specify in a finite and unambiguous way what you think a brain does do with information, then we can design a machine to do it." Pitts and I have proved this constructively. But can you say what you think brains do?”
Warren McCulloch
20.10.2025 20:34 — 👍 6 🔁 0 💬 1 📌 1
Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field.
#neuroskyence
www.thetransmitter.org/the-big-pict...
[image: selfie 😎]
[image: pretty Princeton library with stained glass windows]
Wow, peak library experience at Princeton!
Looking forward to a week of the “Automated Scientific Discovery of Mind and Brain” workshop - where I will also present my work on CorText and brain-language fusion 🧠