
Marcus Klasson

@marcusklasson.bsky.social

Perception Researcher at Ericsson, Sweden. https://marcusklasson.github.io/

205 Followers  |  198 Following  |  10 Posts  |  Joined: 04.12.2024

Latest posts by marcusklasson.bsky.social on Bluesky

Martin Trapp - Assistant Professor in Machine Learning at KTH Royal Institute of Technology.

Want to work on Trustworthy AI? 🚀

I'm seeking exceptional candidates to apply for the Digital Futures Postdoctoral Fellowship to work with me on Uncertainty Quantification, Bayesian Deep Learning, and Reliability of ML Systems.

The position will be co-advised by Hossein Azizpour or Henrik Boström.

02.10.2025 14:46 · 👍 10    🔁 4    💬 1    📌 0
Preview
DeSplat: Decomposed Gaussian Splatting for Distractor-Free Rendering Gaussian splatting enables fast novel view synthesis in static 3D environments. However, reconstructing real-world environments remains challenging as distractors or occluders break the multi-view con...

Paper, videos, and code (nerfstudio) are available!
📄 arxiv.org/abs/2411.19756
🎈 aaltoml.github.io/desplat/

Big ups to Yihao Wang, @maturk.bsky.social, Shuzhe Wang, Juho Kannala, and @arnosolin.bsky.social for making this possible during my time at @aalto.fi 💙🤍

#AaltoUniversity #CVPR2025
[8/8]

13.06.2025 08:04 · 👍 3    🔁 1    💬 0    📌 0

DeSplat has the same FPS and training time as vanilla 3DGS, with some additional overhead for storing distractor Gaussians. Extending it with MLPs or other models is also possible. Adapting DeSplat to video remains to be explored, as distractors that barely move across images can be mistaken for static content. [7/8]

13.06.2025 08:01 · 👍 0    🔁 0    💬 1    📌 0
Post image

This decomposed splatting (DeSplat) approach explicitly separates distractors from static parts. Earlier methods (e.g. SpotlessSplats, WildGaussians) use loss masking of detected distractors to avoid overfitting, while DeSplat instead jointly reconstructs distractor elements.
[6/8]

13.06.2025 07:59 · 👍 0    🔁 0    💬 1    📌 0
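To make the contrast concrete, here is a minimal Python sketch of the two objectives (my own illustration with assumed tensor names and shapes, not code from DeSplat, SpotlessSplats, or WildGaussians): masking-based methods drop detected distractor pixels from the loss, while a decomposed approach supervises the full composite image.

```python
import torch
import torch.nn.functional as F

def masked_l1(render, target, distractor_mask):
    # Masking-style objective (SpotlessSplats / WildGaussians flavour):
    # pixels flagged as distractors are excluded, so the static model
    # is never asked to explain them. distractor_mask is (H, W, 1) in {0, 1}.
    keep = 1.0 - distractor_mask
    return (keep * (render - target).abs()).sum() / keep.sum().clamp(min=1.0)

def composite_l1(static_rgb, distractor_rgb, distractor_alpha, target):
    # Decomposed objective (DeSplat flavour): blend the distractor render
    # over an (assumed opaque) static render and compare the composite
    # against the full training image; no detection mask is needed.
    composite = distractor_rgb * distractor_alpha + static_rgb * (1.0 - distractor_alpha)
    return F.l1_loss(composite, target)

# Toy usage with random renders:
H, W = 8, 8
target = torch.rand(H, W, 3)
print(masked_l1(torch.rand(H, W, 3), target, torch.zeros(H, W, 1)))
print(composite_l1(torch.rand(H, W, 3), torch.rand(H, W, 3), torch.rand(H, W, 1), target))
```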
Post image

Knowing how 3DGS treats distractors, we initialize a set of Gaussians close to every camera view for reconstructing view-specific distractors. The Gaussians initialized from the point cloud should reconstruct static stuff. These separately rendered images are alpha-blended during training.

[5/8]

13.06.2025 07:58 · 👍 0    🔁 0    💬 1    📌 0
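As a rough sketch of that compositing step (a minimal illustration under my own assumptions about tensor shapes and a plain "over" blend, not the DeSplat reference code):

```python
import torch

def composite_renders(static_rgb, static_alpha, distr_rgb, distr_alpha):
    # "Over" compositing: the distractor render sits in front of the
    # static render, consistent with distractor Gaussians living near
    # each camera. RGB tensors are (H, W, 3), alphas (H, W, 1), in [0, 1].
    rgb = distr_rgb * distr_alpha + static_rgb * static_alpha * (1.0 - distr_alpha)
    alpha = distr_alpha + static_alpha * (1.0 - distr_alpha)
    return rgb, alpha

# Toy usage with random renders; during training, the photometric loss
# is computed between the composite `rgb` and the training image.
H, W = 8, 8
rgb, alpha = composite_renders(
    torch.rand(H, W, 3), torch.rand(H, W, 1),
    torch.rand(H, W, 3), torch.rand(H, W, 1),
)
```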

In a viewer, you can see that these spurious artefacts are thin and located close to the camera view. For the scene-overfitting approach in 3DGS, this makes sense: an object that appears in only one view must be placed so close to the camera that no other camera view can see it.

[4/8]

13.06.2025 07:56 · 👍 0    🔁 0    💬 1    📌 0
Post image

This BabyYoda scene from RobustNeRF resembles a crowdsourced scenario, where a set of static toys appears together with toys placed inconsistently between frames.

Vanilla 3DGS is quite robust here, but some views end up being rendered with spurious artefacts (right image).
[3/8]

13.06.2025 07:55 · 👍 0    🔁 0    💬 1    📌 0

Our goal is to learn a scene representation from images that include non-static objects we refer to as distractors. An example is crowdsourced images where different people appear at different locations in the scene, which creates multi-view inconsistencies between the frames.
[2/8]

13.06.2025 07:53 · 👍 0    🔁 0    💬 1    📌 0
Post image

👋 Interested in Gaussian splatting and removing dynamic content from images?

Our DeSplat is presented today at #CVPR2025 in Poster Session 1, ExHall D, Poster #52.

Yihao will be there to present our fully splatting-based method for separating static and dynamic stuff in images.

🧵 [1/8]

13.06.2025 07:52 · 👍 6    🔁 1    💬 1    📌 1

You woke up early in the morning jet-lagged and are having a hard time deciding on a workshop for today @cvprconference.bsky.social?

Here's a reliable choice for you: our workshop on 🛟 Uncertainty Quantification for Computer Vision!

🗓️ Day: Wed, Jun 11
📍 Room: 102 B
#CVPR2025 #UNCV2025

11.06.2025 11:33 · 👍 9    🔁 3    💬 0    📌 0
KTH | Postdoc in robotics with specialization in visual domain adaptation KTH jobs is where you search for jobs at www.kth.se.

KTH is looking for a *Postdoc* to work on visual domain adaptation for mobile robot perception in a joint project with Ericsson in Stockholm.

Apply by May 15 if you are interested in working with computer vision applied to real robots!

More info: www.kth.se/lediga-jobb/...

23.04.2025 10:08 · 👍 2    🔁 1    💬 0    📌 0
Preview
UNCV Workshop @ CVPR 2025 CVPR 2025 Workshop on Uncertainty Quantification for Computer Vision.

The submission deadline has been extended to March 20 for our #CVPR2025 workshop on Uncertainty Quantification for Computer Vision.

Looking forward to seeing your submissions on recognizing failure scenarios and enabling robust vision systems!

More info: uncertainty-cv.github.io/2025/

17.03.2025 17:29 · 👍 11    🔁 5    💬 0    📌 0
Post image

There is still time to submit your papers to our #CVPR2025 workshop on Uncertainty Quantification for Computer Vision, which is part of the workshop lineup at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tennessee.

08.03.2025 15:08 · 👍 13    🔁 6    💬 2    📌 2
Post image Post image Post image

Our Workshop on Uncertainty Quantification for Computer Vision goes to @cvprconference.bsky.social this year!
We have a super line-up of speakers and a call for papers.
This is a chance for your paper to shine at #CVPR2025

โฒ๏ธ Submission deadline: 14 March
๐Ÿ’ป Page: uncertainty-cv.github.io/2025/

28.02.2025 07:28 · 👍 33    🔁 7    💬 0    📌 0

I will present ✌️ BDU workshop papers @ NeurIPS: one by Rui Li (looking for internships) and one by Anton Baumann.

🔗 to extended versions:

1. 🙋 "How can we make predictions in BDL efficiently?" 👉 arxiv.org/abs/2411.18425

2. 🙋 "How can we do prob. active learning in VLMs?" 👉 arxiv.org/abs/2412.06014

10.12.2024 15:18 · 👍 18    🔁 4    💬 1    📌 1
