#BMVC2025
24.11.2025 06:40 — 👍 2 🔁 0 💬 0 📌 0
For more details
📝 Paper: bmva-archive.org.uk/bmvc/2025/a...
💻 Code: github.com/valeoai/muddos
This is joint work with my great co-authors @alexandreboulch.bsky.social, @gillespuy.bsky.social, @tuanhungvu.bsky.social, Renaud Marlet and @ncourty.bsky.social.
Key findings:
1️⃣ The LiDAR backbone architecture has a major impact on cross-domain generalization.
2️⃣ A single pretrained backbone can generalize to many domain shifts.
3️⃣ Freezing the pretrained backbone + training only a small MLP head gives the best results (see the sketch below).
We systematically study how to best exploit vision foundation models (like DINOv2) for UDA on LiDAR data and identify practical “recipes” that consistently give strong performance across challenging real-world domain gaps.
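As a minimal illustration of finding 3️⃣, here is a PyTorch sketch of the frozen-backbone recipe. This is not the paper's code: the backbone stands in for a pretrained model (e.g. a DINOv2 ViT or a LiDAR network with loaded weights), and all dimensions, the head size and the optimizer settings are placeholders.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone; in practice load e.g. DINOv2 or a
# pretrained LiDAR network here and reuse its weights.
backbone = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 256))

# Freeze the backbone entirely...
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

# ...and train only a small MLP head on top of the frozen features.
num_classes = 10
head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, num_classes))
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(32, 64)                    # dummy inputs (e.g. per-point features)
y = torch.randint(0, num_classes, (32,))   # dummy semantic labels

with torch.no_grad():                      # frozen backbone: no gradients needed
    feats = backbone(x)
logits = head(feats)
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
opt.step()
```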
24.11.2025 05:00 — 👍 1 🔁 0 💬 1 📌 0
🚗🌐 Working on domain adaptation for 3D point clouds / LiDAR?
We'll present MuDDoS at BMVC: a method that boosts multimodal distillation for 3D semantic segmentation under domain shift (the generic distillation idea is sketched after the venue details below).
📍 BMVC
🕚 Monday, Poster Session 1: Multimodal Learning (11:00–12:30)
📌 Hadfield Hall #859
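One common form of image-to-LiDAR ("multimodal") distillation, as in the SLidR/ScaLR line of work mentioned further down this feed, pulls each 3D point's feature towards the 2D feature of the pixel it projects onto. The sketch below shows only that generic ingredient with an assumed cosine loss and assumed shapes; it is not MuDDoS itself (see the paper for the actual method).

```python
import torch
import torch.nn.functional as F

def distill_loss(point_feats, pixel_feats):
    """Generic 2D-to-3D feature distillation loss (illustrative, not MuDDoS).

    point_feats: (N, D) features from the 3D network (student).
    pixel_feats: (N, D) features from a frozen 2D model (teacher),
                 gathered at the pixels the N points project onto.
    """
    point_feats = F.normalize(point_feats, dim=-1)
    pixel_feats = F.normalize(pixel_feats, dim=-1)
    # Maximize cosine similarity between each paired (point, pixel) feature.
    return (1.0 - (point_feats * pixel_feats).sum(dim=-1)).mean()

# Dummy usage: 1024 points with 128-dim features.
loss = distill_loss(torch.randn(1024, 128), torch.randn(1024, 128))
```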
and Aniruddha Kembhavi, Adrien Gaidon, Nicolas Mansard, and Justin Carpentier as afternoon ones
21.11.2025 20:38 — 👍 4 🔁 2 💬 1 📌 0
One of those internships is on Gromov $\delta$-hyperbolicity for GNNs, and will be co-supervised by Nicolas, Laetitia Chapel and myself. Take a look and spread the word!
07.11.2025 13:45 — 👍 10 🔁 3 💬 1 📌 0
Happy to represent Ukraine at #ICCV2025. Come see my poster today at 11:45 (#399)!
21.10.2025 19:35 — 👍 17 🔁 2 💬 0 📌 0
Our recent research will be presented at @iccv.bsky.social! #ICCV2025
We’ll present 5 papers about:
💡 self-supervised & representation learning
🌍 3D occupancy & multi-sensor perception
🧩 open-vocabulary segmentation
🧠 multimodal LLMs & explainability
valeoai.github.io/posts/iccv-2...
Come say hi at our poster on October 21st at 11:45, poster session 1 (#399)! We introduce unsupervised post-training of ViTs that enhances dense features for in-context tasks.
First conference as a PhD student, really excited to meet new people.
Aloha #iccv25 – here we come! Excited to be presenting new *St3R models PANSt3R, HAMSt3R & HOSt3R. We're also introducing ‘Geo4D’ and ‘LUDVIG’ 🫢, giving invited talks and mentoring! Full @iccv.bsky.social
programme below (or tinyurl.com/asbn5b5d) 🧵1/9
Another great event for the @valeoai.bsky.social team: the PhD defense of Corentin Sautier.
His thesis «Learning Actionable LiDAR Representations w/o Annotations» covers the papers BEVContrast (self-supervised LiDAR feature learning), SLidR and ScaLR (distillation), and UNIT and Alpine (solving tasks w/o labels).
Thank you @skamalas.bsky.social! Looking forward to my journey in Grenoble!
06.10.2025 22:15 — 👍 4 🔁 0 💬 0 📌 0
So excited to attend the PhD defense of @bjoernmichele.bsky.social at @valeoai.bsky.social! He's presenting his research results from the last 3 years in 3D domain adaptation: SALUDA (unsupervised DA), MuDDoS (multimodal UDA), TTYD (source-free UDA).
06.10.2025 12:18 — 👍 12 🔁 2 💬 2 📌 0
It’s PhD graduation season in the team!
Today, @bjoernmichele.bsky.social is defending his PhD on "Domain Adaptation for 3D Data"
Best of luck! 🚀
Congratulations to our lab colleagues who have been named Outstanding Reviewers at #ICCV2025 👏
Andrei Bursuc @abursuc.bsky.social
Anh-Quan Cao @anhquancao.bsky.social
Renaud Marlet
Eloi Zablocki @eloizablocki.bsky.social
@iccv.bsky.social
iccv.thecvf.com/Conferences/...
Discovered that our RangeViT paper keeps being cited in what might be LLM-generated papers. The number of citations has increased rapidly in recent weeks. Too good to be true.
These papers popped up on different platforms, but mainly on ResearchGate, with ~80 papers in just 3 weeks.
[1/]
SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation On Diverse Modalities has been published in TMLR today 🚀. It was a huge team effort to design (and publish) an open-source, fully reproducible DA benchmark 🧵1/n. openreview.net/forum?id=k9F...
29.07.2025 12:54 — 👍 16 🔁 7 💬 1 📌 0
1/ Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP and DINOv2 on various benchmarks, setting a new standard for open-source research.
21.07.2025 14:47 — 👍 85 🔁 21 💬 2 📌 3
The visualisation of the shifts is really great! Even though I am finishing a thesis on domain adaptation for 3D, these shifts always remained a bit abstract to me in their formal definition, whereas with the visualisation in the space(s) it is much clearer.
02.07.2025 07:18 — 👍 1 🔁 0 💬 0 📌 0
The most important aspect when facing data shift is the type of shift present in the data. I will give below a few examples of shifts and some existing methods to compensate for them. 🧵1/6
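As background for this thread (these are textbook definitions, not necessarily the exact examples the thread gives), two of the most common shift types between a source domain $S$ and a target domain $T$ can be written as

$$\text{covariate shift:}\quad p_S(x) \neq p_T(x), \qquad p_S(y \mid x) = p_T(y \mid x),$$
$$\text{label shift:}\quad p_S(y) \neq p_T(y), \qquad p_S(x \mid y) = p_T(x \mid y).$$

Under covariate shift, for instance, reweighting source samples by $w(x) = p_T(x)/p_S(x)$ makes the source training loss an unbiased estimate of the target loss.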
01.07.2025 09:38 — 👍 30 🔁 16 💬 2 📌 1
I really enjoyed it! Generating the dataset myself made it very easy to start and play with. Also, while I knew the ideas of flow matching at a high level, it was great to implement it once myself and to see the steps in the code.
29.06.2025 14:56 — 👍 3 🔁 0 💬 1 📌 0
I wrote a notebook for a lecture/exercise on image generation with flow matching. The idea is to use FM to render images composed of simple shapes using their attributes (type, size, color, etc). Not super useful but fun and easy to train! (The core training step is sketched below.)
colab.research.google.com/drive/16GJyb...
Comments welcome!
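For context, here is a minimal sketch of the flow-matching training step such a notebook typically implements (straight interpolation path, velocity regression). The toy data, conditioning and network are stand-ins, not the notebook's actual code:

```python
import torch
import torch.nn as nn

img_dim, cond_dim = 3 * 32 * 32, 8  # toy image and attribute-embedding sizes

# Velocity-field network: predicts v from (noisy image, condition, time).
model = nn.Sequential(
    nn.Linear(img_dim + cond_dim + 1, 512), nn.ReLU(),
    nn.Linear(512, img_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x1 = torch.rand(16, img_dim)       # dummy "rendered shapes" images in [0, 1]
cond = torch.randn(16, cond_dim)   # dummy attribute embeddings (type, size, color)
x0 = torch.randn_like(x1)          # noise samples
t = torch.rand(16, 1)              # random times in [0, 1]

xt = (1 - t) * x0 + t * x1         # point on the straight path noise -> image
v_target = x1 - x0                 # constant velocity of that path
v_pred = model(torch.cat([xt, cond, t], dim=-1))
loss = ((v_pred - v_target) ** 2).mean()
loss.backward()
opt.step()
```

At sampling time one would integrate the learned velocity field from t=0 to t=1, e.g. with a simple Euler loop.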
Looks great! I am sure some of your colleagues in the lab would also be interested in having a look at these handhelds during a lunch break 😅
29.06.2025 10:15 — 👍 2 🔁 0 💬 1 📌 0
1/n 🚀 New paper out - accepted at #ICCV2025!
Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding
Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
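As a rough illustration of how dense features are used for low-shot in-context segmentation (a common nearest-neighbor evaluation protocol, not necessarily DIP's exact one): each query patch simply copies the label of its most similar labeled support patch.

```python
import torch
import torch.nn.functional as F

def in_context_segmentation(query_feats, support_feats, support_labels):
    """Nearest-neighbor label propagation over dense patch features.

    query_feats:    (Q, D) patch features of the query image.
    support_feats:  (S, D) patch features of the labeled support images.
    support_labels: (S,)   per-patch class labels of the support patches.
    Returns (Q,) predicted labels for the query patches.
    """
    q = F.normalize(query_feats, dim=-1)
    s = F.normalize(support_feats, dim=-1)
    nearest = (q @ s.T).argmax(dim=-1)  # most cosine-similar support patch
    return support_labels[nearest]

# Dummy usage: ViT-style 14x14 query patches, two support images, 21 classes.
preds = in_context_segmentation(
    torch.randn(196, 768), torch.randn(392, 768), torch.randint(0, 21, (392,))
)
```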
Going to the hospital because I broke my wrist smashing the endorse button:
www.understandingai.org/p/i-got-fool...
Thank you for highlighting this article. While it is written for AI-for-science, many of the author's remarks and statements, in my opinion, also strongly resonate with my own "AI subfield".
20.05.2025 09:34 — 👍 3 🔁 0 💬 0 📌 0
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!
cvpr.thecvf.com/Conferences/...
For me, citing the work plus a one-liner on why we think a quantitative comparison is not useful (e.g. because it is unclear how the results were obtained) has mostly not been challenged by reviewers. But there were the one-line reviews that listed this as a major weakness, which then also causes frustration.
29.04.2025 09:10 — 👍 1 🔁 0 💬 0 📌 0
Our paper "LiDPM: Rethinking Point Diffusion for Lidar Scene Completion" got accepted to IEEE IV 2025!
tldr: LiDPM enables high-quality LiDAR completion by applying a vanilla DDPM with tailored initialization, avoiding local diffusion approximations.
Project page: astra-vision.github.io/LiDPM/
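Reading the tl;dr literally, "vanilla DDPM with tailored initialization" suggests starting the standard reverse diffusion not from pure noise but from the incomplete scan noised to an intermediate step. A rough sketch of that idea (the schedule values, shapes and the zero-noise placeholder for the denoiser are all assumptions, not LiDPM's actual code):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)   # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def noised_init(x_scan, t_init):
    """Tailored init: forward-diffuse the incomplete scan to step t_init."""
    ab = alpha_bars[t_init]
    return ab.sqrt() * x_scan + (1 - ab).sqrt() * torch.randn_like(x_scan)

def ddpm_step(x_t, t, eps_pred):
    """One vanilla DDPM reverse step, given the model's noise prediction."""
    a, ab = alphas[t], alpha_bars[t]
    mean = (x_t - (1 - a) / (1 - ab).sqrt() * eps_pred) / a.sqrt()
    if t == 0:
        return mean
    return mean + betas[t].sqrt() * torch.randn_like(x_t)

# Dummy usage with (N, 3) points; a trained denoiser would supply eps_pred.
x = noised_init(torch.randn(2048, 3), t_init=500)
for t in range(500, -1, -1):
    eps_pred = torch.zeros_like(x)  # placeholder for model(x, t)
    x = ddpm_step(x, t, eps_pred)
```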