With the examiners blurred in the background at least we can see Jacob smiling in full view!
04.12.2025 19:05

@dimadamen.bsky.social
Professor of Computer Vision, @BristolUni. Senior Research Scientist @GoogleDeepMind - passionate about the temporal stream in our lives. http://dimadamen.github.io
Congratulations to Jacob Chalk, who passed his PhD viva @compscibristol.bsky.social on
"Leveraging Multimodal Data for Egocentric Video Understanding" with no corrections!
Papers in ICASSP'23, CVPR'24, CVPR'25, 3DV'25, TPAMI'25
jacobchalk.github.io
Examiners: @hildekuehne.bsky.social, @andrewowens.bsky.social & Wei-Hong Li
Super-exciting talk by Ani Kembhavi from Wayve AI @bristoluni.bsky.social @compscibristol.bsky.social #MaVi Seminar today!
World models for evaluating autonomous driving, GAIA3 released! End-to-end driving model and loads of insights!
Thanks for visiting and spending the day talking to researchers.
Seeing without Pixels: Perception from Camera Trajectories
Zihui Xue, Kristen Grauman, @dimadamen.bsky.social, Andrew Zisserman, Tengda Han
tl;dr: in the title. I love such "blind baseline" papers.
arxiv.org/abs/2511.21681
It was exciting to attend SC'25 #SC25 in #StLouis #Missouri and visit the @bristoluni.bsky.social @compscibristol.bsky.social Bristol Centre for Supercomputing (#BriCS) stand showcasing our fantastic #Isambard_AI, the 11th fastest supercomputer globally.
Pics with @simonmcs.bsky.social and Sadaf Alam.
13.11.2025 12:16

Check out Leonie's (@bossemel.bsky.social) upcoming NeurIPS Datasets and Benchmarks paper about a really interesting new dataset for evaluating models of human visual learning.
12.11.2025 15:56

Bad reviews AND bad weather here... we got the unfair end of the deal...
More than the bad reviews, I am mostly frustrated by the unkind messages reviewers send... If you don't agree with how we did something, that's your right, but stating it's unreasonable or ridiculous is not within your right!
Prof. @tokehoye.bsky.social (Aarhus University) and I have an open PhD position (jointly advised) on biodiversity monitoring with camera trap networks. Deadline: 15-Jan-2026
Please help us share this post among students you know with an interest in Machine Learning and Biodiversity!
OK!!!!!
06.11.2025 20:49

"Sliding is all you need" (aka "What really matters in image goal navigation") has been accepted to 3DV 2026 (@3dvconf.bsky.social) as an Oral presentation!
By Gianluca Monaci, @weinzaepfelp.bsky.social and myself.
@naverlabseurope.bsky.social
Because in other countries, particularly in the US, PhD students are not paid to be students over the summer and need to take jobs/internships. Only in Europe do we pay staff and students salaries for all 12 months.
Apologies if that's incorrect but this is what I was told.
Have a question for a #CVPR2026 organizer? Use the form.
Form: support.conferences.computer.org/cvpr/help-desk
The trained visibility head shows impressive performance in jointly identifying camera motion and tracking dynamic objects...
We match Co-Tracker3 on RoboTAP and outperform it on EgoPoints and RGB-S through correspondences alone.
Code and Models are out
Work led by Rhodri Guerrier with Adam W. Harley.
3/3
By balancing static and dynamic correspondences, the model maintains MASt3R's power in tracking static parts of the scene while also tracking dynamic points successfully.
We use *no* temporal knowledge (no windows) - only pairwise matching!
rhodriguerrier.github.io/PointSt3R/
2/3
New Paper
PointSt3R: Point Tracking through 3D Grounded Correspondence
arxiv.org/abs/2510.26443
Can point tracking be reformulated solely as pairwise frame correspondence?
We fine-tune MASt3R with dynamic correspondences and a visibility loss, achieving competitive point tracking results.
1/3
PointSt3R: Point Tracking through 3D Grounded Correspondence
R. Guerrier, @adamharley.bsky.social, @dimadamen.bsky.social
Bristol/Meta
rhodriguerrier.github.io/PointSt3R/
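A toy sketch of the pairwise-matching idea from the thread above (illustration only, not the paper's actual model: the descriptor arrays and matching function here are hypothetical stand-ins for MASt3R-style features):

```python
import numpy as np

def track_by_pairwise_matching(frame_feats, query_idx):
    """Toy point 'tracker': locate a query point in every frame by
    matching its frame-0 descriptor against each frame independently.
    Each frame is a separate pairwise correspondence problem;
    no temporal window or motion model is used.

    frame_feats: list of (N, D) arrays, one descriptor per candidate point.
    query_idx: index of the query point in frame 0.
    Returns: list with the best-matching point index in each frame.
    """
    q = frame_feats[0][query_idx]
    q = q / np.linalg.norm(q)
    matches = []
    for feats in frame_feats:
        # cosine similarity between the query and every candidate descriptor
        normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        matches.append(int(np.argmax(normed @ q)))
    return matches
```

A real system would also need something like the visibility head mentioned in the thread to flag occluded points where no reliable match exists; this sketch always returns each frame's best match regardless.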
📢 New in ScanNet++: High-Res 360° Panos!
Chandan Yeshwanth and Yueh-Cheng Liu have added pano captures for 956 ScanNet++ scenes, fully aligned with the 3D meshes, DSLR, and iPhone data - multiple panos per scene
Check it out:
Docs kaldir.vc.in.tum.de/scannetpp/do...
Code github.com/scannetpp/sc...
Special thanks to @elliottwu.bsky.social for visiting
@bristoluni.bsky.social to give a #MaVi seminar: From Pixels to 3D Motion
We enjoyed your visit! Thanks for staying for all the 1-1s with the researchers.
The ELLIS Society welcomes its new Board. As the primary decision-making body, it will play a vital role in shaping the future of ELLIS and advancing #AI and #MachineLearning research across Europe in a time of global change.
Read the full article: ellis.eu/news/ellis-s...
Nice writeup in @caltech.edu news about the impact of the #Visipedia project in Computer Vision and Citizen Science
24.10.2025 21:48

We have a new sequence model for robotics, which will be presented at #NeurIPS2025:
Kinaema: A recurrent sequence model for memory and pose in motion
arxiv.org/abs/2510.20261
By @mbsariyildiz.bsky.social, @weinzaepfelp.bsky.social, G. Bono, G. Monaci and myself
@naverlabseurope.bsky.social
1/9
A reminder which might be relevant now: we are looking to hire a senior research scientist in Robotics at @naverlabseurope.bsky.social in Grenoble, France.
23.10.2025 18:19

EuroHPC wrote a piece about our love of GPUs!
A EuroHPC Success Story | Clear Vision for Self-Driving Cars
www.eurohpc-ju.europa.eu/eurohpc-succ...
As part of the visit, Francois examined PhD candidate Kevin Flanagan.
Congrats to Dr Kevin and first advisor Michael Wray on a career achievement. Coincidentally, Kevin also received the Outstanding Reviewer Award @neuripsconf.bsky.social #NeurIPS2025 on the day of his viva!
#ProudAdvisor
2/2
Many thanks to Francois Bremond (INRIA Sophia-Antipolis) for a 2-day #Bristol visit to the @bristoluni.bsky.social #MaVi (Machine Learning and Computer Vision) research group.
Great presentation, incl. #CVPR2025 and #ICCV2025 papers from Francois's group, sharing insights and future directions.
1/2
Say hello to @margretkeuper.bsky.social, Prof at Uni Mannheim 🇩🇪 and ELLIS Fellow at ELLIS Unit Saarbrücken. Her area of research is machine learning for computer vision.
Her advice: When your research goals are much harder than anticipated, take a step back to see the big picture.
#WomenInELLIS
We have a new internship position open in our team at Naver Labs Europe, on AI for robotics: manipulation using 3D foundation models.
@naverlabseurope.bsky.social
This is a collaboration with Sorbonne University/ISIR (Nicolas Thome)
You can apply online:
careers.werecruit.io/en/naver-lab...
Thanks to the organisers of the DataCV and Ego360 workshops for inviting me to give remote talks @iccv.bsky.social this morning.
To catch up on #ICCV2025, the slides for both talks are available:
* Creating a CV Dataset in 2025
* Video Understanding Out of the frame
dimadamen.github.io/talks.html
We round off #iccv25 with highlight paper Geo4D on Thursday.
TLDR: Geo4D repurposes a video diffusion model to reconstruct dynamic scenes in 4D
Paper: arxiv.org/abs/2504.07961
Project: geo4d.github.io
@chuanxiaz.bsky.social @dlarlus.bsky.social @oxford-vgg.bsky.social
🧵 9