
Christina Sartzetaki

@sargechris.bsky.social

PhD candidate @ UvA 🇳🇱, ELLIS 🇪🇺 | {video, neuro, cognitive}-AI | Neural networks 🤖 and brains 🧠 watching videos | 🔗 https://sites.google.com/view/csartzetaki/

73 Followers  |  99 Following  |  13 Posts  |  Joined: 12.11.2024

Latest posts by sargechris.bsky.social on Bluesky

Language Models in Plato's Cave
Why language models succeeded where video models failed, and what that teaches us about AI

sergeylevine.substack.com/p/language-m...

11.06.2025 11:47 | 👍 2    🔁 0    💬 0    📌 0
Post image

Excited to be presenting this paper at #ICLR2025 this week!
Come to the poster if you want to know more about how human brains and DNNs process video 🧠🤖

📆 Sat 26 Apr, 10:00-12:30 - Poster session 5 (#64)
📄 openreview.net/pdf?id=LM4PY...
๐ŸŒ sergeantchris.github.io/hundred_mode...

23.04.2025 10:57 | 👍 10    🔁 3    💬 0    📌 1
Post image

New preprint (#neuroscience #deeplearning doi.org/10.1101/2025...)! We trained 20 DCNNs on 941,235 images with varying scene segmentation (original, object-only, silhouette, background-only). Despite object recognition accuracy varying (27-53%), all networks showed similar EEG prediction.

15.03.2025 13:55 | 👍 16    🔁 6    💬 1    📌 0
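For readers wondering what "EEG prediction" from a DCNN looks like in practice, here is a minimal, generic sketch: cross-validated ridge regression from one layer's image features to EEG responses. All arrays below are random stand-ins, and the choice of RidgeCV is an assumption for illustration, not necessarily the preprint's exact pipeline.

```python
# Generic sketch of predicting EEG responses from DCNN features with
# cross-validated ridge regression (illustration only; random stand-in data).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, n_features, n_outputs = 1000, 256, 64
dcnn_feats = rng.standard_normal((n_images, n_features))  # one DCNN layer's features per image
eeg = rng.standard_normal((n_images, n_outputs))          # EEG amplitudes per image (channels x time, flattened)

model = RidgeCV(alphas=np.logspace(-2, 4, 7))
# Mean cross-validated R^2 across folds; per-channel/per-timepoint variants are also common.
scores = cross_val_score(model, dcnn_feats, eeg, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.3f}")
```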

✨ The VIS Lab at the #University of #Amsterdam is proud and excited to announce it has #TWELVE papers 🚀 accepted for the leading #AI-#makers conference on representation learning (#ICLR2025) in Singapore 🇸🇬. 1/n
👇👇👇 @ellisamsterdam.bsky.social

03.02.2025 07:44 | 👍 17    🔁 4    💬 1    📌 0

Excited to announce that this has been accepted at ICLR 2025!

24.01.2025 20:20 | 👍 1    🔁 0    💬 0    📌 0
The Algonauts Project 2025 homepage

(1/4) The Algonauts Project 2025 challenge is now live!

Participate and build computational models that best predict how the human brain responds to multimodal movies!

Submission deadline: 13th of July.

#algonauts2025 #NeuroAI #CompNeuro #neuroscience #AI

algonautsproject.com

06.01.2025 10:08 | 👍 37    🔁 27    💬 2    📌 3

9/ This is our first research output in this interesting new direction and I'm actively working on it, so stay tuned for updates and follow-up work!
Feel free to discuss your ideas and opinions with me ⬇️

11.12.2024 16:13 | 👍 0    🔁 0    💬 0    📌 0

8/ 🎯 With this work we aim to forge a path that widens our understanding of temporal and semantic video representations in brains and machines, ideally leading towards more efficient video models and more mechanistic explanations of processing in the human brain.

11.12.2024 16:13 | 👍 2    🔁 0    💬 1    📌 0
Post image

7/ We report a significant negative correlation between model FLOPs and alignment in several high-level brain areas, indicating that computationally efficient neural networks can potentially produce more human-like semantic representations.

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0
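As a toy illustration of how such a trend can be quantified (the numbers below are made up, not the paper's results), one can rank-correlate per-model FLOPs against per-model alignment scores within a region:

```python
# Illustrative test of a FLOPs-vs-alignment relationship across models
# (hypothetical stand-in numbers, not the paper's data).
import numpy as np
from scipy.stats import spearmanr

flops = np.array([4e9, 12e9, 35e9, 70e9, 110e9])      # hypothetical per-model FLOPs
alignment = np.array([0.31, 0.29, 0.26, 0.22, 0.21])  # hypothetical alignment scores in one region

rho, p = spearmanr(flops, alignment)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a negative rho would mirror the reported trend
```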
Post image

6/ Training dataset biases related to a particular functional selectivity (e.g. face features) can carry over into brain alignment with the corresponding functionally selective brain area (e.g. the face-selective region FFA).

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0
Post image

5/ Comparing model architectures, CNNs exhibit a better hierarchy overall (with a clear mid-depth peak for early regions and gradual improvement as depth increases for late regions). Transformers, however, achieve an impressive correlation to early regions even at one tenth of layer depth.

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0
Post image

4/ We decouple temporal modeling from action space optimization by adding image action recognition models as a control. Our results show that temporal modeling is key for alignment to early visual brain regions, while a relevant classification task is key for alignment to higher-level regions.

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0
Post image

3/ We disentangle 4 factors of variation (temporal modeling, classification task, architecture, and training dataset) that affect model-brain alignment, which we measure by conducting Representational Similarity Analysis (RSA) across multiple brain regions and model layers.

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0
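For context, here is a minimal sketch of the RSA idea mentioned above (illustrative only; the array shapes, the 1 - Pearson dissimilarity, and the Spearman comparison are common choices assumed here, not necessarily the paper's exact settings):

```python
# Minimal RSA sketch: build a representational dissimilarity matrix (RDM)
# for a model layer and a brain region, then correlate their upper triangles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed RDM: 1 - Pearson correlation for every pair of stimuli (rows)."""
    return pdist(responses, metric="correlation")

def rsa_score(model_acts, brain_resp):
    """Spearman correlation between the model RDM and the brain RDM."""
    rho, _ = spearmanr(rdm(model_acts), rdm(brain_resp))
    return rho

# Random stand-ins for real layer activations and fMRI responses to the same stimuli.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((100, 512))   # 100 stimuli x 512 model features
brain_resp = rng.standard_normal((100, 2000))  # 100 stimuli x 2000 voxels
print(f"RSA (Spearman rho): {rsa_score(model_acts, brain_resp):.3f}")
```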

2/ We take a step in this direction by performing a large-scale benchmarking of models on their representational alignment to the recently released BOLD Moments Dataset of fMRI recordings from humans watching videos.

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0

1/ Humans are very efficient at processing continuous visual input, yet neural networks trained to process videos are still not up to that standard.
What can we learn from comparing the internal representations of the two systems (biological and artificial)?

11.12.2024 16:13 | 👍 0    🔁 0    💬 1    📌 0
One Hundred Neural Networks and Brains Watching Videos: Lessons from Alignment
What can we learn from comparing video models to human brains, arguably the most efficient and effective video processing systems in existence? Our work takes a step towards answering this question by...

📢 New preprint!

We benchmark 99 image and video models 🤖 on brain representational alignment to fMRI data of 10 humans 🧠 watching videos!
Here's a quick breakdown: 🧵⬇️

www.biorxiv.org/content/10.1...

11.12.2024 16:13 | 👍 10    🔁 1    💬 1    📌 2
Post image

After a great conference in Boston, CCN is going to take place in Amsterdam in 2025! To help the exchange of ideas between #neuroscience, cognitive science, and #AI, CCN will for the first time have full-length paper submissions (alongside the established 2-pagers)! Info below 👇
#NeuroAI #CompNeuro

12.11.2024 09:27 | 👍 165    🔁 83    💬 4    📌 13
