Victoria Bosch's Avatar

Victoria Bosch

@initself.bsky.social

neuromantic - ML and cognitive computational neuroscience - PhD student at Kietzmann Lab, Osnabrück University. ⛓️ https://init-self.com

623 Followers  |  428 Following  |  27 Posts  |  Joined: 05.10.2023

Latest posts by initself.bsky.social on Bluesky

Preview
Leveraging insights from neuroscience to build adaptive artificial intelligence Nature Neuroscience - Adaptive intelligence envisions AI that, like animals, learns online, generalizes and adapts quickly. This Perspective reviews biological foundations, progress in AI and...

Interested in the latest advances in neuroscience (neural dynamics and internal models) and how they can be leveraged to build smarter, adaptive AI?

➡️ My first real solo piece 🖤🫶 @natneuro.nature.com

rdcu.be/eWVmA

31.12.2025 08:00 — 👍 117    🔁 34    💬 5    📌 1
Post image

When and why do modular representations emerge in neural networks?

@stefanofusi.bsky.social and I posted a preprint answering this question last year, and now it has been extensively revised, refocused, and generalized. Read more here: doi.org/10.1101/2024... (1/7)

09.01.2026 19:06 — 👍 74    🔁 18    💬 1    📌 2
Post image

🚨new work with the dream team @danakarca.bsky.social @loopyluppi.bsky.social @fatemehhadaeghi.bsky.social @stuartoldham.bsky.social @duncanastle.bsky.social
We use game theory to show that the brain is not optimally wired for communication; there's more to its story:
www.biorxiv.org/content/10.6...

15.12.2025 08:01 — 👍 60    🔁 26    💬 4    📌 0
Post image

Brains have many pathways/subnetworks, but which principles underlie their formation?

In our #NeurIPS paper led by Jack Cook we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models🧵

#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813

21.11.2025 12:01 — 👍 37    🔁 12    💬 1    📌 0

In this piece for @thetransmitter.bsky.social, I argue that ecological neuroscience should leverage generative video and interactive models to simulate the world from animals' perspectives.

The technological building blocks are almost here - we just need to align them for this application.

🧠🤖

08.12.2025 15:59 — 👍 42    🔁 14    💬 0    📌 1

Looking forward!

03.12.2025 13:23 — 👍 6    🔁 0    💬 0    📌 0

Congrats! ✨

25.11.2025 22:05 — 👍 2    🔁 0    💬 1    📌 0

🚨 Out in Patterns!

We asked whether complex neural dynamics like predictive remapping and allocentric coding can emerge from simple physical principles, in this case energy efficiency. Turns out they can!
More information in the 🧵 below.

I am super excited to see this one out in the wild.

20.11.2025 19:47 — 👍 17    🔁 3    💬 3    📌 0

Y’all are reading this paper in the wrong way.

We love to trash dominant hypotheses, but we need to look for evidence against the manifold hypothesis elsewhere:

This elegant work doesn't show neural dynamics are high D, nor that we should stop using PCA

It’s quite the opposite!

(thread)

25.11.2025 16:16 — 👍 69    🔁 23    💬 3    📌 3

Congrats Thomas! Great to see this out :)

21.11.2025 18:25 — 👍 2    🔁 0    💬 1    📌 0

What happens if you hook up an energy-efficiency optimising RNN to active vision input?

It learns predictive remapping and path integration into allocentric scene coordinates.

Now out in Patterns: www.cell.com/patterns/ful...

21.11.2025 08:01 — 👍 27    🔁 10    💬 1    📌 1
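The "energy-efficiency pressure" idea in the post above can be sketched as a toy objective: an RNN driven by retinal glimpses, penalised for its own activity. This is a minimal, hypothetical sketch; the sizes, names, and exact loss terms are my illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

D_IN, D_HID = 24, 48  # illustrative sizes

W_in = rng.normal(scale=0.1, size=(D_IN, D_HID))
W_rec = rng.normal(scale=0.1, size=(D_HID, D_HID))
W_out = rng.normal(scale=0.1, size=(D_HID, D_IN))

def rollout(glimpses):
    """Run the RNN over a sequence of retinal inputs."""
    h = np.zeros(D_HID)
    states = []
    for x in glimpses:
        h = np.tanh(x @ W_in + h @ W_rec)
        states.append(h)
    return np.stack(states)

def energy_penalised_loss(glimpses, lam=1e-2):
    """Reconstruction error plus a metabolic (activity) cost.
    The paper's pressure is energy efficiency; the exact loss
    terms here are illustrative assumptions."""
    H = rollout(glimpses)
    recon = np.mean((H @ W_out - glimpses) ** 2)
    energy = lam * np.mean(H ** 2)  # penalise firing rates
    return recon + energy

glimpses = rng.normal(size=(10, D_IN))  # 10 fixations of input
loss = energy_penalised_loss(glimpses)
```

The claim in the post is that, once trained under a pressure like this on active vision input, such networks end up exhibiting predictive remapping and allocentric coding without being asked to.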
Preview
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...

🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14

18.11.2025 12:34 — 👍 83    🔁 28    💬 3    📌 5
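The "predict the features of the next glimpse" objective above can be sketched as a tiny self-supervised loss: from the current glimpse's features plus the upcoming eye-movement vector, predict the features at the next fixation. All names, sizes, and the exact loss form are illustrative assumptions, not the preprint's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_FEAT, D_EYE, D_HID = 16, 2, 32  # illustrative sizes

# Toy predictor: current glimpse features + saccade vector
# -> predicted features at the next fixation.
W1 = rng.normal(scale=0.1, size=(D_FEAT + D_EYE, D_HID))
W2 = rng.normal(scale=0.1, size=(D_HID, D_FEAT))

def predict_next_glimpse(feat, saccade):
    z = np.concatenate([feat, saccade])
    return np.tanh(z @ W1) @ W2

def next_glimpse_loss(feats, saccades):
    """Self-supervised objective: at each fixation, predict the
    features the eyes will land on next."""
    errs = [np.mean((predict_next_glimpse(f, s) - f_next) ** 2)
            for f, s, f_next in zip(feats[:-1], saccades, feats[1:])]
    return float(np.mean(errs))

feats = rng.normal(size=(8, D_FEAT))    # features at 8 fixations
saccades = rng.normal(size=(7, D_EYE))  # eye movements between them
loss = next_glimpse_loss(feats, saccades)
```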
Post image

6. The AI scientist took 45 minutes and $8.25 in LLM tokens to find a new tuning equation that fits the data better, and predicts the population code’s high-dimensional structure – even though we had only tasked it to model single-cell tuning.

14.11.2025 18:07 — 👍 12    🔁 3    💬 1    📌 0

New preprint led by @pablooyarzo.bsky.social together with @kohitij.bsky.social, Diego Vidaurre & Radek Cichy.

Using EEG + fMRI, we show that when humans recognize images that feedforward CNNs fail on, the brain recruits cortex-wide recurrent resources.

www.biorxiv.org/content/10.1... (1/n)

07.11.2025 09:39 — 👍 26    🔁 5    💬 1    📌 1

Thanks! We’ll put the code and chat interface out soon :)

04.11.2025 16:27 — 👍 1    🔁 0    💬 0    📌 0

Congratulations!!

04.11.2025 13:52 — 👍 1    🔁 0    💬 0    📌 0

Thanks! 🧠✨

03.11.2025 17:23 — 👍 2    🔁 0    💬 1    📌 0

Thanks to all coauthors! 🦾
@anthesdaniel.bsky.social
@adriendoerig.bsky.social
@sushrutthorat.bsky.social
Peter König
@timkietzmann.bsky.social

/fin

03.11.2025 15:17 — 👍 4    🔁 0    💬 0    📌 0
Preview
Brain-language fusion enables interactive neural readout and in-silico experimentation Large language models (LLMs) have revolutionized human-machine interaction, and have been extended by embedding diverse modalities such as images into a shared language space. Yet, neural decoding has...

We are convinced that these results mark a shift from static neural decoding toward interactive, generative brain-language interfaces.

Preprint: www.arxiv.org/abs/2509.23941

03.11.2025 15:17 — 👍 5    🔁 0    💬 1    📌 0
Post image Post image

CorText also responds to in-silico microstimulations in line with experimental predictions: For example, when amplifying face-selective voxels for trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition we can "remove people". 7/n

03.11.2025 15:17 — 👍 5    🔁 0    💬 1    📌 1
Post image

Following Shirakawa et al. (2025), we test zero-shot neural decoding: When entire semantic categories (e.g., zebras, surfers, airplanes) are withheld during training, the model can still give meaningful descriptions of the visual content. 6/n

03.11.2025 15:17 — 👍 2    🔁 0    💬 1    📌 0
Post image

What can we do with it? For example, we can have CorText answer questions about a visual scene (“What’s in this image?” “How many people are there?”) that a person saw while in an fMRI scanner. CorText never sees the actual image, only the brain scan. 5/n

03.11.2025 15:17 — 👍 2    🔁 1    💬 1    📌 0

By moving neural data into LLM token space, we gain open-ended, linguistic access to brain scans as experimental probes. At the same time, this has the potential to unlock many additional downstream capabilities (think reasoning, in-context learning, web search, etc.). 4/n

03.11.2025 15:17 — 👍 1    🔁 0    💬 1    📌 0

To accomplish this, CorText fuses fMRI data into the latent space of an LLM, turning neural signal into tokens that the model can reason about in response to questions. This sets it apart from existing decoding techniques, which map brain data into static embeddings/output. 3/n

03.11.2025 15:17 — 👍 2    🔁 0    💬 1    📌 0
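The fusion step described above can be sketched roughly like this: a learned adapter projects an fMRI pattern into a few "soft tokens" in the LLM's embedding space, which are prepended to the text prompt. The adapter, shapes, and token counts here are my illustrative assumptions, not details from the preprint.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 1024  # flattened fMRI pattern (tiny, illustrative size)
D_EMBED = 64     # LLM token-embedding dimension (illustrative)
N_SOFT = 4       # number of "brain tokens" injected into the prompt

# Hypothetical learned linear adapter: maps one brain scan to a few
# soft tokens living in the same space as ordinary word embeddings.
W = rng.normal(scale=0.02, size=(N_VOXELS, N_SOFT * D_EMBED))

def brain_to_tokens(voxels):
    """Project an fMRI pattern into N_SOFT pseudo-token embeddings."""
    return (voxels @ W).reshape(N_SOFT, D_EMBED)

# Prepend the brain tokens to the text prompt's embeddings; a frozen
# LLM can then attend over both modalities in a single sequence.
scan = rng.normal(size=N_VOXELS)
prompt_embeds = rng.normal(size=(12, D_EMBED))  # e.g. an embedded question
sequence = np.concatenate([brain_to_tokens(scan), prompt_embeds], axis=0)
print(sequence.shape)  # (16, 64): 4 brain tokens + 12 text tokens
```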

Generative language models are revolutionizing human-machine interaction. Importantly, such systems can now reason cross-modally (e.g. vision-language models). Can we do the same with neural data - i.e., can we build brain-language models with comparable flexibility? 2/n

03.11.2025 15:17 — 👍 2    🔁 0    💬 1    📌 0
Post image

Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n

03.11.2025 15:17 — 👍 129    🔁 52    💬 4    📌 8

In his article “Mysterium Iniquitatis of Sinful Man Aspiring into the Place of God”, which is a very sane title (contra its contents ofc)

20.10.2025 20:40 — 👍 1    🔁 0    💬 0    📌 0
“To the theoretical question, "Can you design a machine to do whatever a brain can do?" the answer is this: "If you will specify in a finite and unambiguous way what you think a brain does do with information, then we can design a machine to do it." Pitts and I have proved this constructively. But can you say what you think brains do?”

Warren McCulloch

20.10.2025 20:34 — 👍 6    🔁 0    💬 1    📌 1
Preview
Connecting neural activity, perception in the visual system Figuring out how the brain uses information from visual neurons may require new tools. I asked nine experts to weigh in.

Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field.

#neuroskyence

www.thetransmitter.org/the-big-pict...

13.10.2025 13:23 — 👍 57    🔁 25    💬 3    📌 3
selfie 😎

pretty Princeton library with stained glass windows

Wow, peak library experience at Princeton!
Looking forward to a week of the “Automated Scientific Discovery of Mind and Brain” workshop - where I will also present my work on CorText and brain-language fusion 🧠

29.09.2025 16:40 — 👍 10    🔁 0    💬 0    📌 0
