Arno Onken

@arnoonken.bsky.social

Lecturer at the University of Edinburgh, interested in probabilistic and machine learning methods for modeling and analyzing neural activity.

71 Followers  |  121 Following  |  2 Posts  |  Joined: 14.01.2025

Latest posts by arnoonken.bsky.social on Bluesky

Movie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex
Understanding how the brain encodes complex, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here, we introduce ViV1T, a transformer-based model trained on natural movies to pr...

The model also works with datasets containing a few hundred neurons from different animals and laboratories. There is more good stuff in the appendix of the paper and the code repository!

Paper: www.biorxiv.org/content/10.1...
Code and model weights: github.com/bryanlimy/Vi...

7/7

19.09.2025 12:37 | 👍 4   🔁 1   💬 0   📌 0

We sincerely thank Turishcheva & Fahey et al. (2023) for organising the Sensorium challenge(s!) and for making their high-quality, large-scale mouse V1 recordings publicly available, which made this work possible!

6/7

19.09.2025 12:37 | 👍 3   🔁 1   💬 1   📌 0

We compared our model against SOTA models from the Sensorium 2023 challenge and showed that ViV1T achieves the best predictive performance while being more computationally efficient than the competing models. We also evaluated the model's data efficiency by varying the number of training samples and neurons.

5/7

19.09.2025 12:37 | 👍 4   🔁 1   💬 1   📌 0
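
If it helps to picture that data-efficiency evaluation, here is a minimal, hypothetical sketch of such a sweep: retrain on random subsets of trials and neurons and record held-out prediction quality. The train_fn, eval_fn, trials and neuron_ids names are placeholders for illustration, not the actual training pipeline from the paper or repository.

import random

def data_efficiency_sweep(train_fn, eval_fn, trials, neuron_ids,
                          trial_fractions=(0.1, 0.25, 0.5, 1.0),
                          neuron_counts=(200, 500, 1000)):
    # Train on progressively smaller random subsets of trials and neurons,
    # then score each model with the provided evaluation function
    # (e.g. correlation between predicted and recorded held-out responses).
    results = {}
    for frac in trial_fractions:
        n_trials = max(1, int(frac * len(trials)))
        trial_subset = random.sample(trials, n_trials)
        for n_neurons in neuron_counts:
            neuron_subset = random.sample(neuron_ids, min(n_neurons, len(neuron_ids)))
            model = train_fn(trial_subset, neuron_subset)   # assumed training entry point
            results[(frac, n_neurons)] = eval_fn(model)     # assumed held-out metric
    return results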

Moving beyond gratings, we used ViV1T to generate centre-surround most exciting videos (MEVs) via the Inception Loop (Walker et al. 2019). Our in vivo experiments confirmed that MEVs elicit stronger contextual modulation than gratings, natural images and videos, and most exciting images (MEIs).

4/7

19.09.2025 12:37 | 👍 4   🔁 1   💬 1   📌 0
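
For readers unfamiliar with the Inception Loop, the sketch below shows the general idea: gradient ascent on a video so as to maximise the response a trained model predicts for one neuron. The model interface, video shape and pixel squashing are assumptions for illustration, not the actual ViV1T/MEV optimisation code.

import torch

def most_exciting_video(model, neuron_idx, frames=30, height=36, width=64,
                        steps=200, lr=0.05):
    # Start from a small random video: (batch, frames, height, width).
    video = torch.randn(1, frames, height, width) * 0.1
    video.requires_grad_(True)
    optimizer = torch.optim.Adam([video], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Assume the model maps a video to per-neuron responses over time:
        # (batch, frames, neurons). Maximise the mean response of one neuron.
        responses = model(torch.sigmoid(video))  # sigmoid keeps pixels in [0, 1]
        loss = -responses[0, :, neuron_idx].mean()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(video).detach()

The centre-surround MEVs in the paper additionally involve separate centre and surround regions of the stimulus; that structure is omitted from this sketch for brevity.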

ViV1T also revealed novel functional features: new movement- and contrast-dependent properties of contextual responses to surround stimuli in V1 neurons. We validated these predictions in vivo!

3/7

19.09.2025 12:37 | 👍 4   🔁 1   💬 1   📌 0

ViV1T, trained only on natural movies, captured the well-known direction tuning and contextual modulation of V1. Despite having no built-in mechanism for modelling neuronal connectivity, the model predicted feedback-dependent contextual modulation (including the feedback onset delay!) (Keller et al. 2020).

2/7

19.09.2025 12:37 | 👍 3   🔁 1   💬 1   📌 0
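
As a rough illustration of this kind of in-silico probing, the sketch below presents drifting gratings at several directions to a trained response model and computes a simple direction-selectivity index from its predictions. The grating generator and the assumed input/output shapes are illustrative, not the protocol used in the paper.

import torch

def drifting_grating(direction_deg, frames=30, height=36, width=64,
                     spatial_freq=0.04, temporal_freq=2.0, fps=30.0):
    # Full-field sinusoidal grating drifting in the given direction; pixel
    # values lie in [0, 1]. All parameters here are illustrative assumptions.
    theta = torch.deg2rad(torch.tensor(float(direction_deg)))
    ys, xs = torch.meshgrid(torch.arange(height, dtype=torch.float32),
                            torch.arange(width, dtype=torch.float32),
                            indexing="ij")
    proj = xs * torch.cos(theta) + ys * torch.sin(theta)  # position along drift axis
    t = torch.arange(frames, dtype=torch.float32) / fps
    phase = 2 * torch.pi * (spatial_freq * proj[None] - temporal_freq * t[:, None, None])
    return 0.5 + 0.5 * torch.cos(phase)  # (frames, height, width)

def direction_selectivity_index(model, directions=(0, 45, 90, 135, 180, 225, 270, 315)):
    # Time-averaged predicted response per neuron for each drift direction,
    # assuming the model maps (batch, frames, height, width) -> (batch, frames, neurons).
    mean_responses = []
    for d in directions:
        video = drifting_grating(d)[None]  # add batch dimension
        with torch.no_grad():
            pred = model(video)
        mean_responses.append(pred[0].mean(dim=0))
    resp = torch.stack(mean_responses)                       # (n_directions, n_neurons)
    pref = resp.argmax(dim=0)                                # preferred direction index
    null = (pref + len(directions) // 2) % len(directions)   # opposite direction
    r_pref = resp.gather(0, pref[None]).squeeze(0)
    r_null = resp.gather(0, null[None]).squeeze(0)
    return (r_pref - r_null) / (r_pref + r_null + 1e-8)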

We present our preprint on ViV1T, a transformer for dynamic mouse V1 response prediction. We reveal novel response properties and confirm them in vivo.

With @wulfdewolf.bsky.social, Danai Katsanevaki, @arnoonken.bsky.social, @rochefortlab.bsky.social.

Paper and code at the end of the thread!

🧵 1/7

19.09.2025 12:37 | 👍 17   🔁 12   💬 2   📌 0

Thank you for the correction! It seems the pictures show the Anatomy Lecture Theatre, whereas Burke was dissected in the Usha Kasera Lecture Theatre, formerly known as the Old Anatomy Lecture Theatre. However, Burke's skeleton is on display right next to the theatre shown in the pictures.

29.01.2025 12:46 | 👍 2   🔁 0   💬 0   📌 0

Yes, indeed! It is a lecture theatre of historical significance. In 1829, William Burke, of the murderers Burke and Hare, was dissected in that lecture theatre. Burke and Hare murdered 16 people and sold their corpses for anatomy lectures.

29.01.2025 10:25 | 👍 5   🔁 0   💬 1   📌 1