
Bart Duisterhof

@bardienus.bsky.social

PhD Student @cmurobotics.bsky.social with @jeff-ichnowski.bsky.social || DUSt3R Research Intern @naverlabseurope || 4D Vision for Robot Manipulation 📷 He/Him - https://bart-ai.com

2,238 Followers  |  691 Following  |  28 Posts  |  Joined: 18.11.2024

Posts by Bart Duisterhof (@bardienus.bsky.social)

RaySt3R was accepted to NeurIPS! Check out the HuggingFace demo for image to 3D in cluttered scenes huggingface.co/spaces/bartd...

19.09.2025 17:28 — 👍 5    🔁 2    💬 0    📌 0

In "hearing the slide" 👂 (led by @yuemin-mao.bsky.social) we estimate *loss* of contact with a contact microphone, and use it to learn dynamic constraints. ⚡ It allows moving multiple intricate objects 🍷 efficiently, even objects that would otherwise be hard to grasp. fast-non-prehensile.github.io

12.06.2025 15:32 — 👍 4    🔁 0    💬 0    📌 0

For which the code is also available github.com/naver/pow3r

12.06.2025 13:41 — 👍 6    🔁 2    💬 0    📌 1
Preview
GitHub - naver/dune: Code repository for "DUNE: Distilling a Universal Encoder from Heterogeneous 2D and 3D Teachers"

Thanks Christian for the advertisement.

github link: github.com/naver/dune

06.06.2025 13:01 — 👍 15    🔁 3    💬 0    📌 0

🔗 Project Website: rayst3r.github.io
📄 arXiv: arxiv.org/abs/2506.05285
🚀 Code: github.com/Duisterhof/...
🤗 HF Demo: Coming (very) soon!

@CMU_Robotics @SCSatCMU @nvidia @NVIDIAAI @NVIDIARobotics

06.06.2025 13:52 — 👍 2    🔁 0    💬 0    📌 0

Big thanks to the awesome contributors to this project! 👏 Jan Oberst, @bowenwen_me, @BirchfieldStan, @RamananDeva and @jeff_ichnowski. Also thanks to OctMAE author @s1wase, @nvidia for sponsoring compute 🖥️, and the scientists at @naverlabseurope for the inspiration! 🧗‍♂️

06.06.2025 13:52 — 👍 1    🔁 0    💬 1    📌 0

We also study the impact of the confidence threshold on reconstruction quality. Our ablations suggest that a higher confidence threshold improves accuracy and reduces edge bleeding, at the cost of completeness. Users can tune the threshold for application-specific requirements 🎛️.
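The accuracy/completeness trade-off described above can be sketched with a toy evaluation. Everything below (function names, the brute-force nearest-neighbor metric) is illustrative, not RaySt3R's actual evaluation code: filtering by a higher confidence threshold drops uncertain points, which tends to lower the prediction-to-ground-truth error (accuracy) while risking a higher ground-truth-to-prediction error (completeness).

```python
import numpy as np

def one_sided_chamfer(src, dst):
    """Mean nearest-neighbor distance from each point in src to dst (brute force)."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def eval_threshold(pred, conf, gt, thresh):
    """Accuracy = pred-to-gt distance over kept points; completeness = gt-to-pred."""
    kept = pred[conf > thresh]
    if len(kept) == 0:
        return np.inf, np.inf  # nothing survives the filter
    return one_sided_chamfer(kept, gt), one_sided_chamfer(gt, kept)
```

Sweeping `thresh` over the confidence range traces out the curve a user would tune against for their application.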

06.06.2025 13:52 — 👍 0    🔁 0    💬 1    📌 0

We evaluate RaySt3R against the baselines on synthetic and real-world datasets. The results suggest RaySt3R achieves zero-shot generalization to the real world and outperforms all baselines by up to 44% in 3D chamfer distance 🚀.

06.06.2025 13:52 — 👍 0    🔁 0    💬 1    📌 0

We train RaySt3R on a newly curated dataset of 12 million views 📷 with Objaverse and GSO objects. The ablations 🔍 suggest that both more data and more diverse data improve RaySt3R's performance. RaySt3R does not require GT meshes, paving the way for training on real-world data.

06.06.2025 13:52 — 👍 0    🔁 0    💬 1    📌 0

💡 Our key insight is that 3D object shape completion can be recast as a novel-view synthesis problem. RaySt3R takes a masked RGB-D image as input and predicts depth maps and object masks for novel views. We query multiple views and merge the predictions into a consistent point cloud.
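The query-and-merge step above can be sketched roughly as follows. This is a minimal illustration, not RaySt3R's real pipeline: the function names (`backproject`, `merge_views`), the per-view tuple layout, and the confidence filter are all assumptions for the sketch.

```python
import numpy as np

def backproject(depth, K, T_wc):
    """Back-project a depth map into world-frame 3D points.
    depth: (H, W); K: 3x3 intrinsics; T_wc: 4x4 camera-to-world pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid
    z = depth.ravel()
    pts_cam = np.linalg.inv(K) @ np.vstack([u.ravel() * z, v.ravel() * z, z])
    pts_h = np.vstack([pts_cam, np.ones_like(z)])   # homogeneous coords
    return (T_wc @ pts_h)[:3].T                     # (H*W, 3) world points

def merge_views(preds, K, conf_thresh=0.5):
    """Fuse per-view (depth, mask, confidence, pose) predictions into one cloud,
    keeping only confident foreground points."""
    clouds = []
    for depth, mask, conf, T_wc in preds:
        keep = (mask > 0) & (conf > conf_thresh)
        pts = backproject(depth, K, T_wc)
        clouds.append(pts[keep.ravel()])
    return np.concatenate(clouds, axis=0)
```

Each queried novel view contributes only its confident, in-mask depth pixels, so overlapping views reinforce each other while uncertain regions are dropped.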

06.06.2025 13:51 — 👍 1    🔁 0    💬 1    📌 0

We focus on multi-object 3D shape completion for robotics. Robots are commonly equipped with an RGB-D camera 📷, but their measurements are noisy and incomplete.

Using only DINOv2 features 🦖 as pretraining, we train a new model (RaySt3R) to produce accurate geometry.

06.06.2025 13:51 — 👍 1    🔁 0    💬 1    📌 0

Imagine if robots could fill in the blanks in cluttered scenes.

✨ Enter RaySt3R: a single masked RGB-D image in, complete 3D out.
It infers depth, object masks, and confidence for novel views, and merges the predictions into a single point cloud. rayst3r.github.io

06.06.2025 13:51 — 👍 24    🔁 3    💬 1    📌 2

Do you think Europe will take the opportunity? The Netherlands is even cutting research funds under the new administration... It feels like there are still significantly more opportunities in the US.

26.03.2025 15:24 — 👍 1    🔁 0    💬 0    📌 0

Thanks Chris! This was a push with the entire dust3r team @naverlabseurope.bsky.social, congrats everyone!

26.03.2025 15:15 — 👍 8    🔁 0    💬 0    📌 0

The Best Student Paper Award goes to MASt3R-SfM! #3DV2025

26.03.2025 01:24 — 👍 42    🔁 8    💬 0    📌 2

🎉 Excited to share that our paper was a finalist for Best Paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care 💇🏻💆🏼 that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check out our work: moehair.github.io @cmurobotics.bsky.social 🧵 1/7

17.03.2025 16:02 — 👍 10    🔁 5    💬 1    📌 1

MUSt3R: Multi-view Network for Stereo 3D Reconstruction

Yohann Cabon, Lucas Stoffl, Leonid Antsfeld, Gabriela Csurka, Boris Chidlovskii, Jerome Revaud, @vincentleroy.bsky.social

tl;dr: make DUSt3R symmetric and iterative, and add a multi-layer memory mechanism → multi-view DUSt3R

arxiv.org/abs/2503.01661

04.03.2025 08:26 — 👍 25    🔁 4    💬 1    📌 0

Great news, CMU's Center for Machine Learning and Health (CMLH) decided to fund another year of our research! If you're a PhD student at CMU, consider applying for the next iterations of the fellowship - the funding is generous and relatively unconstrained :)

31.01.2025 20:28 — 👍 3    🔁 0    💬 0    📌 0

😆

13.01.2025 14:17 — 👍 1    🔁 0    💬 0    📌 0

Is the book as good as, or better than, the show for "The Three-Body Problem"?

05.12.2024 09:58 — 👍 0    🔁 0    💬 2    📌 0

Watch Professor Jeff Ichnowski's RI Seminar talk: "Learning for Dynamic Robot Manipulation of Deformable and Transparent Objects" 🦾🤖

@jeff-ichnowski.bsky.social closed out our Fall seminar series. Keep an eye out for the Spring schedule in the new year!

www.youtube.com/watch?v=DvvF...

26.11.2024 18:45 — 👍 15    🔁 2    💬 0    📌 0

Intro Post
Hello World!
I'm a 2nd year Robotics PhD student at CMU, working on distributed dexterous manipulation, accessible soft robots and sensors, sample efficient robot learning, and causal inference.

Here are my cute robots:
PS: The videos are old and sped up. They move slower in the real world :3

23.11.2024 18:49 — 👍 15    🔁 3    💬 0    📌 0

My growing list of #computervision researchers on Bsky.

Missed you? Let me know.

go.bsky.app/M7HGC3Y

19.11.2024 23:00 — 👍 131    🔁 42    💬 88    📌 9
Preview
GitHub - BerkeleyAutomation/FogROS2: An Adaptive and Extensible Platform for Cloud and Fog Robotics Using ROS 2

My advisor @jeff-ichnowski.bsky.social! For example: github.com/BerkeleyAuto...

22.11.2024 21:46 — 👍 2    🔁 0    💬 0    📌 0

For international students: renewing your visa asap might be a good idea.

22.11.2024 20:14 — 👍 1    🔁 0    💬 1    📌 0

My lab mate @yuemin-mao.bsky.social :)

22.11.2024 17:31 — 👍 2    🔁 0    💬 0    📌 0

Welcome to all new arrivals here on Bluesky! :) Here's a starter pack of people working on computer vision.
go.bsky.app/PkAKJu5

17.11.2024 08:05 — 👍 96    🔁 34    💬 21    📌 4

Now that my general computer vision starter pack is full (150/150 entries), here is one specific to 3D Vision: go.bsky.app/Cfm9XFe

21.11.2024 08:15 — 👍 105    🔁 29    💬 10    📌 1

Check out this work by my lab mates: learning dynamic tasks using a soft robotic hand!

21.11.2024 08:20 — 👍 12    🔁 1    💬 0    📌 0

Thank you for making the list! Could you add me as well? I work on vision for robot manipulation :)

20.11.2024 08:44 — 👍 1    🔁 0    💬 0    📌 0