
Cees Snoek

@cgmsnoek.bsky.social

Head of Video & Image Sense Lab | University of Amsterdam | Scientific Director Amsterdam AI | https://www.ceessnoek.info/

349 Followers  |  392 Following  |  16 Posts  |  Joined: 17.11.2024

Posts by Cees Snoek (@cgmsnoek.bsky.social)


Robo Santa is here! πŸŽ…πŸ€– We introduce REALM, a real-to-sim validated benchmark for robot manipulation. High-fidelity simulation + aligned control = real-world proxy. Stress-test your policy! πŸ‘‡
Web martin-sedlacek.com/realm/#takea...
GH github.com/martin-sedla...
arXiv arxiv.org/abs/2512.19562
1/4

23.12.2025 14:04 β€” πŸ‘ 21    πŸ” 5    πŸ’¬ 1    πŸ“Œ 1

πŸ“½οΈ Check out Visual Odometry Transformer! VoT is an end-to-end model for getting accurate metric camera poses from monocular videos.

vladimiryugay.github.io/vot/

07.10.2025 09:02 β€” πŸ‘ 10    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

πŸ–ΌοΈ Most text-to-image models only really work in English.
This limits who can use them and whose imagination they reflect.

We asked: can we build a small, efficient model that understands prompts in multiple languages natively?

09.07.2025 13:27 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0
The government is left behind on AI supercomputers: 'We are at the mercy of big tech.' The AI revolution runs not only on vast amounts of data but also on gigantic, energy-hungry computing farms. In just a few years, the balance of power has shifted from public to commercial parti...

Who has access to computing power for AI? Mostly big tech. A few years ago, it was still the government. Illustrative: Musk's xAI has 200,000 Nvidia chips; the whole of Dutch science has 650. www.volkskrant.nl/tech/de-over...

03.05.2025 09:13 β€” πŸ‘ 8    πŸ” 2    πŸ’¬ 2    πŸ“Œ 1

Congratulations Dr. Tom van Sonsbeek πŸ₯³

15.03.2025 14:13 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Congratulations Dr. @phillip-lippe.bsky.social πŸ₯³

26.02.2025 22:07 β€” πŸ‘ 9    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧠 Union-over-Intersections: Object Detection beyond Winner-Takes-All by Aritra Bhowmik, Pascal Mettes, Martin R. Oswald, Cees Snoek.

03.02.2025 07:44 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ€– TULIP: Token-length Upgraded CLIP by Ivona Najdenkoska, Mohammad Mahdi Derakhshani, Yuki Asano, Nanne van Noord, Marcel Worring, Cees Snoek. 13/n

03.02.2025 07:44 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

🧠 One Hundred Neural Networks and Brains Watching Videos: Lessons from Alignment by Christina Sartzetaki, Gemma Roig, Cees Snoek, Iris Groen. 12/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ€– Near, far: Patch-ordering Enhances Vision Foundation Models' Scene Understanding by Valentinos Pariza, Mohammadreza Salehi, Gertjan Burghouts, Francesco Locatello, Yuki Asano. 11/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧠 Language Agents Meet Causality – Bridging LLMs and Causal World Models by John Gkountouras, Matthias Lindemann, Phillip Lippe, Efstratios Gavves, Ivan Titov. 10/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ€– Grounding Continuous Representations in Geometry: Equivariant Neural Fields by David Wessels, David Knigge, Samuele Papa, Riccardo Valperga, Sharvaree Vadgama, Efstratios Gavves, Erik Bekkers. 9/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧠 DynaPrompt: Dynamic Test-Time Prompt Tuning by Zehao Xiao, Shilin Yan, Jack Hong, Jiayin Cai, Xiaolong Jiang, Yao Hu, Jiayi Shen, Cheems Wang, Cees Snoek. 8/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ€– Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination by Leonardo Barcellona, Andrii Zadaianchuk, Davide Allegro, Samuele Papa, Stefano Ghidoni, Efstratios Gavves. 7/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧠 Compositional Entailment Learning for Hyperbolic Vision-Language Models by Avik Pal, Max van Spengler, Guido Maria D'Amely di Melendugno, Alessandro Flaborea, Fabio Galasso, Pascal Mettes. 6/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ€– CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-Agent Cooperation by Jie Liu, Pan Zhou, Yingjun Du, Ah-Hwee Tan, Cees Snoek, Jan-Jakob Sonke, Efstratios Gavves. 5/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧠 BrainACTIV: Identifying visuo-semantic properties driving cortical selectivity using diffusion-based image manipulation by Diego García Cerdas, Christina Sartzetaki, Magnus Petersen, Gemma Roig, Pascal Mettes, Iris Groen. 4/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ€– An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels by Duy Kien Nguyen, Mahmoud Assran, Unnat Jain, Martin R. Oswald, Cees Snoek, Xinlei Chen. 3/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Congratulations and a shout-out to all our students and (inter)national collaborators. w/ our amazing staff:
Iris Groen, Pascal Mettes, Yuki Asano (now in πŸ‡©πŸ‡ͺ), @egavves.bsky.social .

The 12 papers and their authors below; full paper details, data sets, and source code to follow separately: 2/n

03.02.2025 07:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

✨ The VIS Lab at the #University of #Amsterdam is proud and excited to announce it has #TWELVE papers πŸš€ accepted for the leading #AI-#makers conference on representation learning ( #ICLR2025 ) in Singapore πŸ‡ΈπŸ‡¬. 1/n
πŸ‘‡πŸ‘‡πŸ‘‡ @ellisamsterdam.bsky.social

03.02.2025 07:44 β€” πŸ‘ 17    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

πŸ“’ There is still time to submit your application for the 2nd ELLIS Winter School on Foundation Models, to be held 18-21 March 2025 in Amsterdam!

16.01.2025 12:34 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

We are excited to announce the 2nd ELLIS Winter School on Foundation Models in 2025, 18-21 March in Amsterdam. Secure your spot! ✨

πŸ”— Visit ivi.fnwi.uva.nl/ellis/events...

πŸš€ Apply forms.gle/bYbZi9J7NzCb...

#AI #ML #foundationmodels #ELLISforEurope #ELLISunitAmsterdam

13.12.2024 12:03 β€” πŸ‘ 9    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
One Hundred Neural Networks and Brains Watching Videos: Lessons from Alignment What can we learn from comparing video models to human brains, arguably the most efficient and effective video processing systems in existence? Our work takes a step towards answering this question by...

πŸ“’ New preprint!

We benchmark 99 image and video models πŸ€– on brain representational alignment to fMRI data of 10 humans 🧠 watching videos!
Here’s a quick breakdown:πŸ§΅β¬‡οΈ

www.biorxiv.org/content/10.1...

11.12.2024 16:13 β€” πŸ‘ 10    πŸ” 1    πŸ’¬ 1    πŸ“Œ 2