Florian Hahlbohm


@fhahlbohm.bsky.social

PhD student, Computer Graphics Lab, TU Braunschweig. Radiance Fields and Point Rendering. Webpage: https://fhahlbohm.github.io/

82 Followers 78 Following 19 Posts Joined Nov 2024
1 day ago

But if your PhD took place in an environment that nurtured genuine curiosity, where you drove your own research, defended your ideas through debate, and developed the ability to ask meaningful questions, then the notion of LLMs conducting research probably doesn't sit right with you.

2 weeks ago

SIGBOVIK has a Bluesky now! Follow to learn about more cutting-edge research from the world's most comedic and occasionally scientific academic conference.

11 months ago
Post image

Had the honor to present "Gaussians-to-Life" at #3DV2025 yesterday. In this work, we used video diffusion models to animate arbitrary 3D Gaussian Splatting scenes.
This work was a great collaboration with @moechsle.bsky.social, @miniemeyer.bsky.social, and Federico Tombari.

🧵⬇️

11 months ago
Post image Post image

Had a great experience presenting our work on 3D scene reconstruction from a single image with @visionbernie.bsky.social at #3DV2025 🇸🇬

andreeadogaru.github.io/Gen3DSR

Reach out if you're interested in discussing our research or exploring international postdoc opportunities @fau.de

11 months ago
Video thumbnail

Here is our gaussian splat editor: github.com/m-schuetz/Sp...

Eventually I want it to be able to take scans of ugly streets and beautify them, like a Photoshop for Gaussians. :)

1 year ago

"DaD's a pretty good keypoint detector, probably the best." Nice one 😂

1 year ago

We also provide a multitude of data loaders and camera model implementations, as well as various utilities for optimization and visualization.

1 year ago

Each method has a Trainer, Model, and Renderer class that extend the respective base classes. Many of the current methods also define custom CUDA extensions or a designated loss class.
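The extension pattern described above can be sketched as follows. This is a minimal illustration only; the class names, method signatures, and toy loss are assumptions for demonstration, not NeRFICG's actual API.

```python
# Hypothetical sketch of the per-method structure: each new view synthesis
# method subclasses a Trainer, Model, and Renderer base class.
class BaseModel:
    def forward(self, rays):
        raise NotImplementedError

class BaseRenderer:
    def render(self, model, rays):
        raise NotImplementedError

class BaseTrainer:
    def __init__(self, model, renderer):
        self.model = model
        self.renderer = renderer

    def training_step(self, batch):
        raise NotImplementedError

# A new method plugs in by extending all three:
class ToyModel(BaseModel):
    def forward(self, rays):
        # Return a constant "radiance" per ray.
        return [0.5 for _ in rays]

class ToyRenderer(BaseRenderer):
    def render(self, model, rays):
        return model.forward(rays)

class ToyTrainer(BaseTrainer):
    def training_step(self, batch):
        preds = self.renderer.render(self.model, batch["rays"])
        # Trivial L2-style loss against the batch's target values.
        return sum((p - t) ** 2 for p, t in zip(preds, batch["targets"]))
```

A method that additionally needs custom CUDA extensions or a dedicated loss class would attach those to its own subclasses in the same way.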

1 year ago

NeRFICG is a research-focused framework for developing novel view synthesis methods. Shoutout to my colleague Moritz Kappel, who is responsible for most of the underlying architecture! We think NeRFICG is a decent starting point for any PyTorch-based graphics/vision project.

1 year ago
NeRFICG: A flexible PyTorch framework for simple and efficient implementation of neural radiance fields and rasterization-based view synthesis methods.

Further discussion and ideas for where things could be improved can be found in our paper and the "Additional Notes" in our GitHub repository.

The remainder of this thread is about our framework NeRFICG: github.com/nerficg-proj...

1 year ago
Video thumbnail

Bluesky did not let me have two videos in the same post. So here's the OIT video.

1 year ago
Video thumbnail

An interesting observation we had is that OIT (enabled by setting "Blend Mode" to 3 in the config) seems to help background reconstruction and overall densification. Videos show the first 3K training iterations using hybrid vs. order-independent transparency.

1 year ago

Note that the GUI has a non-negligible impact on frame rate as it is Python-based. So you won't see maximum performance even after turning off v-sync. It is also Linux-only but my colleague Timon Scholz recently started working on a C++ version that also supports Windows.

1 year ago

Btw, all visualizations in this thread use our perspective-correct approach for rendering 3D Gaussians. It is based on ray-casting and can be implemented efficiently. However, the high frame rates reported in our paper are due to the hybrid transparency approach.

1 year ago
Post image Post image Post image Post image

Here are examples using (0) hybrid transparency with K=16, (1) alpha blending of the first 4 fragments per pixel, (2) alpha blending in "global" depth-ordering, and (3) order-independent transparency. The model was trained using the settings from (0).

1 year ago

You can also modify the "Blend Mode" (see the Readme on GitHub) and core size K for blending modes where this is applicable. To reduce compile times, we only compile kernels for K in [1, 2, 4, 8, 16, 32] and "round down" for other values (e.g., 12 -> 8).
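The "round down" behavior described above can be sketched as picking the largest precompiled kernel size not exceeding the requested K. The function name is illustrative, not taken from the actual codebase.

```python
# Kernel variants are only compiled for these core sizes, to keep
# compile times down.
PRECOMPILED_K = [1, 2, 4, 8, 16, 32]

def select_kernel_k(requested: int) -> int:
    """Round a requested core size down to the nearest precompiled K."""
    candidates = [k for k in PRECOMPILED_K if k <= requested]
    if not candidates:
        raise ValueError("K must be >= 1")
    return max(candidates)
```

Under this scheme, a requested core size of 12 would fall back to the K=8 kernel, while 16 or 32 would be used as-is.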

1 year ago
Post image

Via the "Viewer Config" (F3), you can switch to rendering depth maps, and expanding the advanced renderer config allows you to choose between expected (shown here) and median depth.
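For context, the two depth definitions can be sketched for a single pixel from its per-fragment alpha-blending weights. This is a hedged illustration of the standard formulations, not the renderer's actual code.

```python
def blend_weights(alphas):
    """Front-to-back compositing weights: w_i = a_i * prod(1 - a_j, j < i)."""
    weights, transmittance = [], 1.0
    for a in alphas:
        weights.append(a * transmittance)
        transmittance *= (1.0 - a)
    return weights

def expected_depth(depths, alphas):
    """Weighted average of fragment depths."""
    w = blend_weights(alphas)
    return sum(wi * di for wi, di in zip(w, depths))

def median_depth(depths, alphas):
    """Depth of the first fragment where accumulated opacity reaches 0.5."""
    w = blend_weights(alphas)
    acc = 0.0
    for wi, di in zip(w, depths):
        acc += wi
        if acc >= 0.5:
            return di
    return depths[-1] if depths else float("inf")
```

Expected depth blends smoothly across overlapping Gaussians, while median depth snaps to a single surface, which is why the two can look quite different along silhouettes.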

1 year ago
GitHub - MoritzKappel/D-NPC: Official code release for "D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video".

Don't get confused by the "Time" stuff, which is for dynamic scenes reconstructed by methods such as our recent D-NPC: github.com/MoritzKappel...

HTGS also currently does not support changing the background color or using camera models other than "Perspective" without distortion.

1 year ago
Post image Post image

By modifying the "Principal Point" and/or "Focal Length" you can create fun images like the one below. You can even do this while watching your Gaussians train if you set TRAINING.GUI.ACTIVATE to true in the config file.

And yes, you could in theory train on images like this.

1 year ago

Let's start with the GUI features you might want to try with HTGS. If you open the "Camera Config" panel (F4) you can switch between "Orbital" and "Walking" controls. You can also modify the near/far plane.

1 year ago

Many thanks to my co-authors Fabian Friederichs, @timweyrich.bsky.social, @linusfranke.bsky.social, Moritz Kappel, Susana Castillo, @mcstammi.bsky.social, Martin Eisemann, and Marcus Magnor!

Thoughts and things to try in the thread below:

1 year ago
Video thumbnail

We recently released the code for "Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency"

Project Page: fhahlbohm.github.io/htgs/
Code: github.com/nerficg-proj...

1 year ago
An autumnal stump, covered in mushrooms. This is a still from the interactive 3D reconstruction!

I've released a new version of my 3D reconstruction tool, Brush 🖌️ It's a big step forward - the quality & speed now match gsplat, and there's a lot of other new features! See the release notes github.com/ArthurBrusse...

Some of the new features:

1 year ago

@chrisoffner3d.bsky.social we are in need of your eval :)

1 year ago

Merry Christmas :) I tried this as well but with Brush by @arthurperpixel.bsky.social. How many pictures did you take? For me, COLMAP only ended up using like 25/50 images and it didn't work that well. Tbf the lighting was pretty bad.

1 year ago
Post image Post image Post image Post image

Volumetrically Consistent 3D Gaussian Rasterization

Chinmay Talegaonkar, Yash Belhe, Ravi Ramamoorthi, Nicholas Antipa

tl;dr: volumetrically integrate 3D Gaussians directly to compute the transmittance across them analytically, yielding physically accurate alpha values

arxiv.org/abs/2412.03378

1 year ago

I really enjoyed watching the videos the last time you did this. Thanks for making them available to everyone :)
