
Andrei Bursuc

@abursuc.bsky.social

Research Scientist at valeo.ai | Teaching at Polytechnique, ENS | Alumni of Mines Paris, Inria, ENS | AI for Autonomous Driving, Computer Vision, Machine Learning | Robotics amateur ⚲ Paris, France 🔗 abursuc.github.io

5,093 Followers  |  353 Following  |  404 Posts  |  Joined: 06.11.2024

Latest posts by abursuc.bsky.social on Bluesky

Links:
- paper: arxiv.org/abs/2506.09042
- project page: research.nvidia.com/labs/toronto...
- talk by @lealtaixe.bsky.social at CVPR WAD 2025 covering some of the findings in the paper: youtu.be/XKnqN17SoDE?...

06.08.2025 20:39 — 👍 0    🔁 0    💬 0    📌 0

Training with such synthetic data improves performance on lane detection, 3D object detection (from cameras or from LiDAR), and policy learning

06.08.2025 20:38 — 👍 1    🔁 0    💬 1    📌 0

You can train different forms of generative models (diffusion) with various conditionings, including text: camera to LiDAR, HD map to camera, camera to HD map + LiDAR, single camera to multi-camera, ...
They use a unified architecture for point clouds and images

06.08.2025 20:38 — 👍 0    🔁 0    💬 1    📌 0

They compile RDS-HQ, a driving dataset of 750 hours of 30 fps, 6-camera videos with high-quality annotations. They release a subset of 5,842 10-second clips with lane and box annotations, HD maps, and camera extrinsics + intrinsics

06.08.2025 20:38 — 👍 0    🔁 0    💬 1    📌 0

The authors start by looking at LiDAR and train a LiDAR tokenizer starting from Cosmos and range images. The key ingredients to make it work are: 4× row repetition on range images, motion compensation, and high (fp32) precision on depth

06.08.2025 20:37 — 👍 0    🔁 0    💬 1    📌 0
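The ×4 row-repetition trick from the post above can be sketched in a few lines of NumPy. This uses random stand-in data with hypothetical shapes (32-beam sensor, 1024 azimuth bins) — not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# A LiDAR sweep rendered as a range image: 32 beam rows x 1024 azimuth
# columns, each pixel storing the measured depth in meters (stand-in data).
range_image = rng.uniform(1.0, 80.0, size=(32, 1024)).astype(np.float32)

# Row repetition x4: duplicate every beam row so the sparse vertical
# resolution better matches the patch size an image tokenizer expects.
repeated = np.repeat(range_image, repeats=4, axis=0)  # -> (128, 1024)

# Keep depth in float32 throughout: quantizing it away loses the fine
# depth precision needed to reconstruct geometry from the tokens.
assert repeated.dtype == np.float32
```

Rows 0–3 of `repeated` are identical copies of the original row 0, and so on down the image.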

It becomes difficult to keep track of all the Cosmos variants released out there.
Cosmos-Drive-Dreams: Scalable Synthetic Driving Data Generation with World Foundation Models

06.08.2025 20:37 — 👍 2    🔁 0    💬 1    📌 0

Wow! Romania got 4 gold medals at IOI 2025!
The same performance as the China and South Korea teams.

06.08.2025 20:10 — 👍 2    🔁 0    💬 0    📌 0

You fall off so fast if you stop reading papers and writing code

02.08.2025 16:04 — 👍 61    🔁 2    💬 5    📌 2

Romanian mountain roads FTW

02.08.2025 18:55 — 👍 7    🔁 0    💬 0    📌 0

I wonder if people in 15 years will consider it inevitable to have automated tools making decisions in the reviewing process.

Here are a few disordered thoughts:
- reviewing pressure has changed from discarding the incorrect stuff to discarding the uninteresting stuff. Most papers I see are...

30.07.2025 07:54 — 👍 9    🔁 1    💬 2    📌 2

Nice new look!
Is the beard to keep or just for the vacation?

30.07.2025 18:50 — 👍 2    🔁 0    💬 1    📌 0

EurIPS includes a call for both Workshops and Affinity Workshops!
We look forward to making #EurIPS a diverse and inclusive event with you.

The submission deadlines are August 22nd, AoE.

More information at:
eurips.cc/call-for-wor...
eurips.cc/call-for-aff...

28.07.2025 08:51 — 👍 34    🔁 19    💬 0    📌 2

Franca official code and pretrained models are up on GitHub and PyTorch Hub! github.com/valeoai/franca
Eager to learn how it will be used.

28.07.2025 19:20 — 👍 18    🔁 2    💬 0    📌 0

Authors to reviewers:

28.07.2025 16:41 — 👍 11    🔁 3    💬 0    📌 0

Ah, not quite. It's social media for outdoor activities and sports. Wearables are optional :)

28.07.2025 13:37 — 👍 1    🔁 0    💬 0    📌 0

There are quite a few academics there, though not as many as on X/bsky. So far, from my biased stats: ML folks tend to do more biking, while CV ones a bit more running. In summer, many add hiking :)

26.07.2025 11:22 — 👍 1    🔁 0    💬 0    📌 0

Actually, most of my contacts are on the free variant. The paid one can give you some stats, but they're redundant if you have a Garmin watch or similar. It can be interesting for different types of training.
Otherwise you can find lots of uses there: hikes and trails are my favorite :)

26.07.2025 11:18 — 👍 2    🔁 0    💬 1    📌 0

You know you're on vacation when you post more on Strava than you do on other social media

26.07.2025 06:11 — 👍 9    🔁 0    💬 2    📌 0

Ah, I got the joke only later. Dracula's castle is not far from here. I haven't seen that pub yet; it looks too new for my age 🙃

22.07.2025 10:32 — 👍 1    🔁 0    💬 1    📌 0

lol, not quite, at least not yet.
I actually took the photos during a night run upon arrival 😅
Lovely place indeed. Brasov is our base camp when in Romania.

22.07.2025 10:12 — 👍 2    🔁 0    💬 1    📌 0

Home

22.07.2025 07:42 — 👍 11    🔁 0    💬 1    📌 0

... but also quantitatively in unsupervised object discovery (with LOST, TokenCut), again where DINOv2-R shone, and in other dense downstream tasks (linear segmentation, overclustering, in-context learning).
We thought that registers would be redundant here, but it might be worth checking out.

21.07.2025 20:46 — 👍 2    🔁 0    💬 0    📌 0

Thanks for the feedback Nicolas! Happy you like it. Great question!
To be honest, we were pretty happy with the results on dense tasks (both Matryoshka and RASA post-training add up), which were nice qualitatively (attention maps, PCA) in settings where registers excelled ...

21.07.2025 20:43 — 👍 2    🔁 0    💬 1    📌 0

Today, we release Franca, a new vision Foundation Model that matches and often outperforms DINOv2.
The data, the training code and the model weights are open-source.

This is the result of a close and fun collaboration between
@valeoai.bsky.social (in France) and @funailab.bsky.social (in Franconia) 🚀

21.07.2025 14:58 — 👍 21    🔁 4    💬 0    📌 0

11/ A project of this scale would not have been possible without the generous HPC support from @gencifrance.bsky.social and the valuable assistance of @vobeckya.bsky.social. We also thank A. Dravid for their support with LOST evaluations.

21.07.2025 14:55 — 👍 5    🔁 0    💬 0    📌 0
Franca: Nested Matryoshka Clustering for Scalable Visual Representation Learning
We present Franca (pronounced Fran-ka): free one; the first fully open-source (data, code, weights) vision foundation model that matches and in many cases surpasses the performance of state-of-the-art...

10/ Franca is a collaboration between @valeoai.bsky.social and @funailab.bsky.social w/ amazing collaborators S. Venkataramanan, V. Pariza, M. Salehi, L. Knobel, E. Ramzi, @spyrosgidaris.bsky.social and @yukimasano.bsky.social

Paper: arxiv.org/abs/2507.14137
Code: github.com/valeoai/Franca

21.07.2025 14:53 — 👍 7    🔁 0    💬 1    📌 0

9/ On 3D awareness:
- Beats DINOv2 by +3% under large viewpoint shifts on SPair-71K.
- Matches DINOv2 on the NYUv2 depth estimation dataset despite no distillation or NYUv2 pretraining.
- Outperforms SoTA vision encoders on novel view synthesis when probed with Gaussian Splatting.

21.07.2025 14:51 — 👍 4    🔁 0    💬 1    📌 0

8/ Franca on Out-of-distribution detection:
- Benchmarked on 5 datasets: SSB-Hard, NINCO, iNaturalist, OpenImage-O, Texture
- Outperforms DINOv2 at larger scales (ViT-L, ViT-G) and is competitive at ViT-B.
- Unlike DINOv2-B/L, Franca uses no distillation or curated LVD-142M pretraining

21.07.2025 14:51 — 👍 4    🔁 0    💬 1    📌 0
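A common baseline for OOD detection on frozen features like those benchmarked above is a nearest-prototype cosine score. A minimal sketch with random stand-in features (this is a generic baseline, not the paper's exact protocol; prototype count and feature dimension are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Class prototypes from in-distribution data: in practice, mean frozen-encoder
# features per training class; here, random unit vectors as stand-ins.
prototypes = rng.normal(size=(10, 64))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def ood_score(feat):
    """Higher = more in-distribution: max cosine similarity to any prototype."""
    feat = feat / np.linalg.norm(feat)
    return float((prototypes @ feat).max())

# A sample near a known class scores higher than an unrelated random feature.
id_feat = prototypes[0] + 0.05 * rng.normal(size=64)
ood_feat = rng.normal(size=64)
```

Thresholding this score then separates in-distribution from OOD inputs; better frozen features give cleaner separation.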

7/ Franca shines in dense prediction tasks:
- Linear segmentation: outperforms DINOv2-G by +2% mIoU on Pascal VOC.
- Overclustering (assessing semantic alignment of spatial features): Franca achieves 2% and 10% gains over DINOv2 and Web-SSL, respectively.

21.07.2025 14:51 — 👍 4    🔁 0    💬 1    📌 0

6/ On DAVIS (unseen during pretraining), PCA of patch features reveals a clear gap: DINOv2 produces noisy, fragmented segments; Franca generates dense, contour-aligned regions with consistent colors for similar parts, demonstrating its emergent fine-grained understanding.

21.07.2025 14:50 — 👍 5    🔁 0    💬 1    📌 0
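The PCA visualization of patch features mentioned above is a standard inspection trick: project each patch token onto the top-3 principal components and display them as RGB. A minimal NumPy sketch with random stand-in features (grid size and feature dimension are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Patch features from a frozen ViT encoder: a 16x16 grid of 384-dim tokens
# (random stand-ins here; in practice, the model's last-layer patch tokens).
H, W, D = 16, 16, 384
feats = rng.normal(size=(H * W, D))

# PCA via SVD of the centered features: Vt rows are principal directions.
centered = feats - feats.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pca3 = centered @ Vt[:3].T                     # (H*W, 3) projection

# Min-max normalize each component to [0, 1] so it can be viewed as RGB.
lo, hi = pca3.min(axis=0), pca3.max(axis=0)
pca3 = (pca3 - lo) / (hi - lo)
rgb = pca3.reshape(H, W, 3)                    # pseudo-color patch image
```

With real features, patches belonging to the same object part land close in feature space and thus get similar colors, which is exactly the consistency gap the DAVIS comparison highlights.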
