
Amr Farahat

@amr-farahat.bsky.social

MD/M.Sc/PhD candidate @ESI_Frankfurt and IMPRS for neural circuits @MpiBrain. Medicine, Neuroscience & AI https://amr-farahat.github.io/

138 Followers  |  379 Following  |  27 Posts  |  Joined: 14.11.2024

Latest posts by amr-farahat.bsky.social on Bluesky


New paper in Imaging Neuroscience by Tom Dupré la Tour, Matteo Visconti di Oleggio Castello, and Jack L. Gallant:

The Voxelwise Encoding Model framework: A tutorial introduction to fitting encoding models to fMRI data

doi.org/10.1162/imag...

16.05.2025 02:08 — 👍 26    🔁 9    💬 0    📌 0
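For readers who want a concrete picture: the core of a voxelwise encoding model is a regularized linear map from stimulus features to each voxel's response. Below is a minimal sketch with synthetic data and made-up sizes, not the paper's actual code or API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 200 stimuli, 50 stimulus features, 10 voxels.
n_stim, n_feat, n_vox = 200, 50, 10
X = rng.standard_normal((n_stim, n_feat))                     # feature matrix
W_true = rng.standard_normal((n_feat, n_vox))
Y = X @ W_true + 0.1 * rng.standard_normal((n_stim, n_vox))   # voxel responses

# Ridge regression in closed form, solved for all voxels at once:
#   W = (X'X + alpha * I)^{-1} X'Y
alpha = 1.0
W_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Model quality is typically summarized per voxel, e.g. as the correlation
# between predicted and measured responses.
Y_pred = X @ W_hat
r = np.array([np.corrcoef(Y[:, v], Y_pred[:, v])[0, 1] for v in range(n_vox)])
```

In practice the regularization strength would be tuned per voxel by cross-validation and accuracy reported on held-out stimuli, not on the training data as here.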

(1/6) Thrilled to share our triple-N dataset (Non-human Primate Neural Responses to Natural Scenes)! It captures thousands of high-level visual neuron responses in macaques to natural scenes using #Neuropixels.

11.05.2025 13:33 — 👍 121    🔁 42    💬 2    📌 1
Integrating multimodal data to understand cortical circuit architecture and function - Nature Neuroscience This paper discusses how experimental and computational studies integrating multimodal data, such as RNA expression, connectivity and neural activity, are advancing our understanding of the architectu...

A Perspective on integrating multimodal data to understand cortical circuit architecture and function

@alleninstitute.bsky.social

www.nature.com/articles/s41...

11.04.2025 09:37 — 👍 20    🔁 3    💬 0    📌 0
Brain implant translates thoughts to speech in an instant Improvements to brain–computer interfaces are bringing the technology closer to natural conversation speed.

Improvements to brain–computer interfaces are bringing the technology closer to natural conversation speed. www.nature.com/articles/d41...

01.04.2025 07:26 — 👍 5    🔁 1    💬 0    📌 0

Yes indeed. It probably has something to do with learning dynamics that favor increasing complexity gradually. Or it could be that the loss landscape has edges between high- and low-complexity volumes.

15.03.2025 13:31 — 👍 3    🔁 0    💬 0    📌 0

In AlexNet, however, the first layers are the most predictive. That's because AlexNet has larger filters in its earlier layers (see Miao and Tong, 2024).

15.03.2025 12:35 — 👍 1    🔁 0    💬 1    📌 0

V1 is usually better predicted by intermediate layers than by early layers, but it depends on the architecture of the model. In Cadena et al. (2019), block3_conv1 in VGG19 was the most predictive. Early layers in VGG have very small receptive fields, which makes it difficult to capture V1-like features.

15.03.2025 12:35 — 👍 1    🔁 0    💬 1    📌 0
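The layer comparison being discussed (fit an encoding model from each layer's activations and see which predicts a neuron best) can be sketched with synthetic data; the layer names and sizes here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: activations of three candidate layers for the same 100
# stimuli, plus recorded responses of one neuron. Layer names are made up.
n_stim = 100
layers = {
    "conv1": rng.standard_normal((n_stim, 30)),
    "conv3": rng.standard_normal((n_stim, 60)),
    "conv5": rng.standard_normal((n_stim, 90)),
}
# Synthetic neuron driven by "conv3" features, so that layer should score best.
neuron = layers["conv3"] @ rng.standard_normal(60) + 0.5 * rng.standard_normal(n_stim)

def heldout_corr(X, y, n_train=70, alpha=1.0):
    """Ridge fit on a training split, correlation on the held-out split."""
    Xtr, Xte, ytr, yte = X[:n_train], X[n_train:], y[:n_train], y[n_train:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(yte, Xte @ w)[0, 1]

scores = {name: heldout_corr(X, neuron) for name, X in layers.items()}
best = max(scores, key=scores.get)
```

With real data the activations would come from a pretrained network and the scores would be averaged over several cross-validation splits.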

This was the most predictive layer of V1 in the VGG16 model. The same held for IT, where it was block4_conv2.

15.03.2025 10:58 — 👍 0    🔁 0    💬 1    📌 0

and then starts increasing again with further training to fit the target function. This is the most likely explanation for the initial drop in V1 prediction.

15.03.2025 10:57 — 👍 1    🔁 0    💬 1    📌 0

We also observed, in separate experiments on the simple CNN models, that the complexity of the models "resets" to a low value (lower than their random-weight complexity) after the first training epoch (likely by operating in the linear part of the activation function).

15.03.2025 10:57 — 👍 1    🔁 0    💬 1    📌 0

Thanks for your interest! Object recognition performance increases from the very first training epoch, and yet V1 prediction drops considerably, so this drop supports the view that object recognition training is not important for V1.

15.03.2025 10:57 — 👍 1    🔁 0    💬 1    📌 0

The legend of the left plot was missing!

14.03.2025 17:32 — 👍 0    🔁 0    💬 0    📌 0
Neural responses in early, but not late, visual cortex are well predicted by random-weight CNNs with sufficient model complexity Convolutional neural networks (CNNs) were inspired by the organization of the primate visual system, and in turn have become effective models of the visual cortex, allowing for accurate predictions of...

read more here
www.biorxiv.org/content/10.1...

13.03.2025 21:32 — 👍 0    🔁 0    💬 0    📌 0

15/15
It is also important to assess model strengths and weaknesses in various ways, not with a single metric like prediction accuracy.

13.03.2025 21:32 — 👍 1    🔁 0    💬 1    📌 0

14/15

Our results also emphasize the importance of rigorous controls when using black-box models like DNNs in neural modeling. They can show what makes a good neural model and help us generate hypotheses about brain computations.

13.03.2025 21:32 — 👍 2    🔁 0    💬 1    📌 0

13/15
Our results suggest that the architectural bias of CNNs is key to predicting neural responses in the early visual cortex, which aligns with results in computer vision showing that random convolutions suffice for several visual tasks.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

12/15
We found that random ReLU networks performed best among random networks, and only slightly worse than their fully trained counterparts.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0
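A toy version of the random-ReLU-features idea: convolve with fixed random filters, apply ReLU, and fit only a linear readout on top. This is an illustrative sketch with synthetic data, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_relu_features(images, filters):
    """Valid 2D convolution with fixed random filters, then ReLU, then
    flattening into one feature vector per image. No weights are trained."""
    n, h, w = images.shape
    k = filters.shape[-1]
    out_h, out_w = h - k + 1, w - k + 1
    feats = np.empty((n, len(filters), out_h, out_w))
    for i in range(n):
        for f, kern in enumerate(filters):
            for y in range(out_h):
                for x in range(out_w):
                    feats[i, f, y, x] = np.sum(images[i, y:y + k, x:x + k] * kern)
    return np.maximum(feats, 0).reshape(n, -1)

# Toy data: 40 random 12x12 "images" and 8 random 3x3 filters.
images = rng.standard_normal((40, 12, 12))
filters = rng.standard_normal((8, 3, 3))
features = random_relu_features(images, filters)   # shape (40, 800)

# Only a linear readout is fit (here by ridge regression) from the fixed
# random features to a synthetic "neural response". Fitting and scoring on
# the same data only illustrates the pipeline, not generalization.
target = images[:, 5, 5] ** 2
alpha = 10.0
w = np.linalg.solve(features.T @ features + alpha * np.eye(features.shape[1]),
                    features.T @ target)
pred = features @ w
```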

11/15
Then we tested the ability of random networks to support texture discrimination, a task known to involve early visual cortex. We created Texture-MNIST, a dataset that allows training on two tasks: object (digit) recognition and texture discrimination.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

10/15
We found that trained ReLU networks are the most V1-like with respect to OS. Moreover, random ReLU networks were the most V1-like among random networks, and even on par with other fully trained networks.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

9/15
We quantified the orientation selectivity (OS) of artificial neurons using circular variance and calculated how their distribution deviates from that of an independent dataset of experimentally recorded V1 neurons.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0
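Circular variance is a standard selectivity index; a minimal implementation (using the doubled angle, since orientation is 180°-periodic) might look like:

```python
import numpy as np

def circular_variance(responses, orientations_deg):
    """Orientation-selectivity index: 0 = perfectly selective, 1 = untuned.
    Angles are doubled because orientation is 180-degree periodic."""
    theta = np.deg2rad(np.asarray(orientations_deg, dtype=float))
    resp = np.asarray(responses, dtype=float)
    vector_sum = np.sum(resp * np.exp(2j * theta))
    return 1.0 - np.abs(vector_sum) / np.sum(resp)

orientations = np.arange(0, 180, 22.5)            # 8 evenly spaced orientations
flat = np.ones_like(orientations)                  # untuned unit
tuned = np.exp(-((orientations - 90) / 20) ** 2)   # sharply tuned near 90 deg

cv_flat = circular_variance(flat, orientations)    # close to 1
cv_tuned = circular_variance(tuned, orientations)  # much smaller
```

Values near 1 indicate an untuned unit; values near 0 indicate sharp orientation selectivity.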

8/15
ReLU was introduced to DNN models inspired by the sparsity of biological neural systems and the input/output function of biological neurons.
To test its biological relevance, we looked at characteristics of early visual processing: orientation selectivity and the capacity to support texture discrimination.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

7/15
Importantly, these findings hold true both for firing rates in monkeys and human fMRI data, suggesting their generalizability.

13.03.2025 21:32 — 👍 1    🔁 0    💬 1    📌 0

6/15
Even when we shuffled the trained weights of the convolutional filters, V1 models were far less affected than IT models.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

5/15
This means that predicting responses in higher visual areas (e.g., IT, VO) strongly depends on precise weight configurations acquired through training in contrast to V1, highlighting the functional specialization of those areas.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

4/15
We quantified the complexity of the models' transformations and found that ReLU models and max-pooling models had considerably higher complexity. Moreover, complexity explained substantial variance in V1 encoding performance in comparison to IT (63%) and VO (55%) (not shown here).

13.03.2025 21:32 — 👍 0    🔁 0    💬 2    📌 0

3/15
Surprisingly, we found that even training simple CNN models directly on V1 data did not improve encoding performance substantially, unlike for IT. However, this was only true for CNNs using ReLU activation functions and/or max pooling.

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

2/15
We found that training CNNs for object recognition doesn’t improve V1 encoding as much as it does for higher visual areas (like IT in monkeys or VO in humans)! Is V1 encoding more about architecture than learning?

13.03.2025 21:32 — 👍 0    🔁 0    💬 1    📌 0

🧵 time!
1/15
Why are CNNs so good at predicting neural responses in the primate visual system? Is it their design (architecture) or learning (training)? And does this change along the visual hierarchy?
🧠🤖
🧠📈

13.03.2025 21:32 — 👍 34    🔁 7    💬 2    📌 0
Distinct roles of PV and Sst interneurons in visually induced gamma oscillations Gamma-frequency oscillations are a hallmark of active information processing and are generated by interactions between excitatory and inhibitory neuro…

Happy to see this study led by Irene Onorato finally out - we show distinct phase locking and spike timing of optotagged PV cells and Sst interneuron subtypes during gamma oscillations in mouse visual cortex, suggesting an update to the classic PING model www.sciencedirect.com/science/arti...

06.03.2025 22:22 — 👍 35    🔁 13    💬 1    📌 0
