Had the honor to present "Gaussians-to-Life" at #3DV2025 yesterday. In this work, we used video diffusion models to animate arbitrary 3D Gaussian Splatting scenes.
This work was a great collaboration with @moechsle.bsky.social, @miniemeyer.bsky.social, and Federico Tombari.
🧵⬇️
28.03.2025 08:35
Had a great experience presenting our work on 3D scene reconstruction from a single image with @visionbernie.bsky.social at #3DV2025 📸🎬
andreeadogaru.github.io/Gen3DSR
Reach out if you're interested in discussing our research or exploring international postdoc opportunities @fau.de
26.03.2025 02:27
Here is our gaussian splat editor: github.com/m-schuetz/Sp...
Eventually I want it to be able to take scans of ugly streets and beautify them, like a Photoshop for Gaussians. :)
21.03.2025 15:48
"DaD's a pretty good keypoint detector, probably the best." Nice one!
10.03.2025 07:56
We also provide a multitude of data loaders and camera model implementations, as well as various utilities for optimization and visualization.
08.03.2025 11:56
Each method has a Trainer, Model, and Renderer class that extend the respective base classes. Many of the current methods also define custom CUDA extensions or a designated loss class.
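The class structure described above could look roughly like this minimal Python sketch. All names here are illustrative placeholders, not NeRFICG's actual API:

```python
# Hypothetical sketch of the Trainer/Model/Renderer extension pattern.
# Class and method names are my own illustration, not NeRFICG's code.
from abc import ABC, abstractmethod

class BaseModel(ABC):
    @abstractmethod
    def forward(self, rays): ...

class BaseRenderer(ABC):
    @abstractmethod
    def render(self, model, rays): ...

class BaseTrainer(ABC):
    def __init__(self, model, renderer):
        self.model, self.renderer = model, renderer
    @abstractmethod
    def training_step(self, batch): ...

# A concrete method extends all three base classes:
class SplatModel(BaseModel):
    def forward(self, rays):
        return [0.0] * len(rays)  # dummy per-ray radiance

class SplatRenderer(BaseRenderer):
    def render(self, model, rays):
        return model.forward(rays)

class SplatTrainer(BaseTrainer):
    def training_step(self, batch):
        pred = self.renderer.render(self.model, batch)
        return sum(pred)  # dummy loss
```

The appeal of this split is that a new method only overrides the pieces it changes, while loaders and loop logic stay shared.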
08.03.2025 11:56
NeRFICG is a research-focused framework for developing novel view synthesis methods. Shoutout to my colleague Moritz Kappel, who is responsible for most of the underlying architecture! We think NeRFICG is a decent starting point for any PyTorch-based graphics/vision project.
08.03.2025 11:56
NeRFICG: A flexible PyTorch framework for simple and efficient implementation of neural radiance fields and rasterization-based view synthesis methods.
Further discussion and ideas for where things could be improved can be found in our paper and the "Additional Notes" in our GitHub repository.
The remainder of this thread is about our framework NeRFICG: github.com/nerficg-proj...
08.03.2025 11:56
Bluesky did not let me have two videos in the same post. So here's the OIT video.
08.03.2025 11:56
An interesting observation we made is that OIT (enabled by setting "Blend Mode" to 3 in the config) seems to help background reconstruction and overall densification. The videos show the first 3K training iterations using hybrid vs. order-independent transparency.
08.03.2025 11:56
Note that the GUI has a non-negligible impact on frame rate, as it is Python-based, so you won't see maximum performance even after turning off v-sync. It is also Linux-only, but my colleague Timon Scholz recently started working on a C++ version that also supports Windows.
08.03.2025 11:56
Btw, all visualizations in this thread use our perspective-correct approach for rendering 3D Gaussians. It is based on ray casting and can be implemented efficiently. However, the high frame rates reported in our paper are due to the hybrid transparency approach.
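As a rough illustration of the ray-casting idea (my own sketch using standard Gaussian math, not the paper's implementation): for a Gaussian with mean mu and inverse covariance A, the density along a ray o + t*d peaks at t* = d^T A (mu - o) / (d^T A d).

```python
# Minimal sketch: find the ray parameter t* where a 3D Gaussian's density
# is maximal along the ray o + t*d. This is my own illustration of the
# underlying math, not code from the HTGS paper.

def matvec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def peak_depth(o, d, mu, A_inv):
    """t* maximizing exp(-0.5 (x-mu)^T A_inv (x-mu)) for x = o + t*d."""
    diff = [m - oi for m, oi in zip(mu, o)]
    return dot(d, matvec(A_inv, diff)) / dot(d, matvec(A_inv, d))
```

Evaluating the Gaussian at this exact depth is what makes the result perspective-correct, instead of relying on an affine 2D projection of the Gaussian.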
08.03.2025 11:56
You can also modify the "Blend Mode" (see the README on GitHub) and the core size K for blending modes where applicable. To reduce compile times, we only compile kernels for K in [1, 2, 4, 8, 16, 32] and "round down" for other values (e.g., 12 -> 8).
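The round-down rule can be sketched as a small helper (my own illustration of the behavior described above, not the framework's code):

```python
# Map a requested core size K to the largest precompiled kernel size
# that does not exceed it, e.g. 12 -> 8. Illustrative sketch only.
COMPILED_CORE_SIZES = (1, 2, 4, 8, 16, 32)

def effective_core_size(k: int) -> int:
    if k < COMPILED_CORE_SIZES[0]:
        raise ValueError(f"core size must be at least 1, got {k}")
    # Largest compiled size <= k.
    return max(c for c in COMPILED_CORE_SIZES if c <= k)
```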
08.03.2025 11:56
Via the "Viewer Config" (F3), you can switch to rendering depth maps; expanding the advanced renderer config lets you switch between expected (shown here) and median depth.
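The difference between the two depth modes can be sketched per ray (an illustration under standard alpha-blending assumptions, not HTGS's actual code):

```python
# Two common depth definitions for alpha-blended renderers, written for a
# single ray with per-sample depths ts and opacities alphas (front to back).
# Illustrative sketch, not the renderer's implementation.

def expected_depth(ts, alphas):
    """Alpha-composited (expected) depth along one ray."""
    depth, transmittance = 0.0, 1.0
    for t, a in zip(ts, alphas):
        depth += transmittance * a * t
        transmittance *= 1.0 - a
    return depth

def median_depth(ts, alphas, threshold=0.5):
    """Depth of the first sample where accumulated opacity crosses 0.5."""
    accumulated, transmittance = 0.0, 1.0
    for t, a in zip(ts, alphas):
        accumulated += transmittance * a
        transmittance *= 1.0 - a
        if accumulated >= threshold:
            return t
    return ts[-1] if ts else 0.0
```

Expected depth averages over all contributing samples, so it can blur across depth discontinuities; median depth snaps to the first sufficiently opaque surface.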
08.03.2025 11:56
GitHub - MoritzKappel/D-NPC: Official code release for "D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video".
Don't get confused by the "Time" stuff, which is for dynamic scenes reconstructed by methods such as our recent D-NPC: github.com/MoritzKappel...
HTGS also does not currently support changing the background color or using camera models other than "Perspective" without distortion.
08.03.2025 11:56
By modifying the "Principal Point" and/or "Focal Length" you can create fun images like the one below. You can even do this while watching your Gaussians train if you set TRAINING.GUI.ACTIVATE to true in the config file.
And yes, you could in theory train on images like this.
08.03.2025 11:56
Let's start with the GUI features you might want to try with HTGS. If you open the "Camera Config" panel (F4) you can switch between "Orbital" and "Walking" controls. You can also modify the near/far plane.
08.03.2025 11:56
Many thanks to my co-authors Fabian Friederichs, @timweyrich.bsky.social, @linusfranke.bsky.social, Moritz Kappel, Susana Castillo, @mcstammi.bsky.social, Martin Eisemann, and Marcus Magnor!
Thoughts and things to try in the thread below:
08.03.2025 11:56
We recently released the code for "Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency"
Project Page: fhahlbohm.github.io/htgs/
Code: github.com/nerficg-proj...
08.03.2025 11:56
An autumnal stump, covered in mushrooms. This is a still from the interactive 3D reconstruction!
I've released a new version of my 3D reconstruction tool, Brush! It's a big step forward: the quality & speed now match gsplat, and there are a lot of other new features! See the release notes: github.com/ArthurBrusse...
Some of the new features:
30.01.2025 16:25
@chrisoffner3d.bsky.social we are in need of your eval)
23.01.2025 10:57
Merry Christmas :) I tried this as well, but with Brush by @arthurperpixel.bsky.social. How many pictures did you take? For me, COLMAP only ended up using like 25/50 images and it didn't work that well. Tbf lighting was pretty bad.
25.12.2024 23:57
I really enjoyed watching the videos the last time you did this. Thanks for making them available to everyone :)
01.12.2024 12:58
synth tinkerer (plinky); midjourney; NVResearch (InstantNGP/NeRF); cofounder MediaMolecule (Dreams, LittleBigPlanet); demoscene/vj (statix/bluespoon)
PhD student @ LIX | BX 21 | MVA 23
PhD Candidate @ Stevens Institute of Technology
Software Engineer, Graphics Programmer
PhD student at the Computer Graphics Lab of the TU Braunschweig under the supervision of Prof. Dr.-Ing. Marcus Magnor. My research focuses on Novel View Synthesis and VR Perception.
Prof ETH Zürich, Director Microsoft Spatial AI Lab, CV/ML/Robotics
PhD Candidate at the Max Planck ETH Center for Learning Systems working on 3D Computer Vision.
https://wimmerth.github.io
Making robots part of our everyday lives. #AI research for #robotics. #computervision #machinelearning #deeplearning #NLProc #HRI Based in Grenoble, France. NAVER LABS R&D
europe.naverlabs.com
PhD Candidate in 3D CV @CogCoVi.bsky.social @FAU.de
Former Intern at RealityLabs, SamsungResearch
andreeadogaru.github.io
Professor for Computer Science at TU Darmstadt, Germany
neural-capture.com
PhD student in Computer Graphics at USI Lugano, Switzerland under Prof. Piotr Didyk. Photographer in my spare time.
https://arcanous98.github.io/
he/him, Professor for Computer Vision in Media Applications, all about 3D graphics and vision, currently dealing with radiance fields and their applications
Researching 3D Reconstruction & Generation | PhD Student, University of Tübingen
Ex-NeRF Herder. Researcher at Google DeepMind.
Official account for International Conference on 3D Vision (3DV) #3DV2026 🇨🇦
Website: https://3dvconf.github.io/
I am a Research Scientist at Google Zurich working on 3d vision (https://m-niemeyer.github.io/)
Professor, University of Tübingen @unituebingen.bsky.social.
Head of Department of Computer Science.
Faculty, Tübingen AI Center 🇩🇪 @tuebingen-ai.bsky.social.
ELLIS Fellow, Founding Board Member 🇪🇺 @ellis.eu.
CV 📷, ML 🧠, Self-Driving, NLP
Graphics researcher at TU Delft. Formerly Intel, KIT, NVIDIA, Uni Bonn. Known for moment shadow maps, MBOIT, blue noise, spectra, light sampling. Opinions are my own.
https://MomentsInGraphics.de