The two most popular 3D Gaussian Splatting repositories on GitHub! Okay, there's no way to reach first place. :)
16.03.2025 19:13 · @mrnerf.bsky.social

Paper: arxiv.org/abs/2412.03844
Project: gujiaqivadin.github.io/hybridgs/
- Our HybridGS achieves state-of-the-art performance on benchmark datasets, outperforming previous methods and setting a new standard for novel view synthesis in scenes with transients.
06.12.2024 07:58

- We develop a multi-view supervision scheme for 3DGS that utilizes overlapping regions across multiple views. This enhances the model's capability to distinguish between static and transient elements, ultimately improving the overall quality of novel view synthesis.
06.12.2024 07:58

**Contributions:**
- We are the first to introduce a novel hybrid representation that combines image-specific 2D Gaussians with static 3D Gaussians, enabling effective modeling of transient objects within casually captured images.
**HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting**
06.12.2024 07:58
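As a loose illustration of the hybrid representation described in the thread above, here is a small numpy sketch of the compositing idea: an image-specific layer of 2D Gaussians is alpha-blended in front of the rendering produced by the shared static 3D Gaussians. This is not the authors' code; the `splat_2d_gaussian` helper and the constant `static_rgb` stand-in for a real 3D-GS rendering are hypothetical.

```python
# Rough sketch (not the authors' code) of the HybridGS compositing idea:
# a per-view layer of image-space 2D Gaussians is alpha-blended over the
# rendering of the shared static 3D Gaussians.
import numpy as np

H, W = 64, 64

def splat_2d_gaussian(mean, cov, color, opacity, H, W):
    """Rasterize one image-space 2D Gaussian into an RGB layer plus alpha."""
    ys, xs = np.mgrid[0:H, 0:W]
    d = np.stack([xs - mean[0], ys - mean[1]], axis=-1)      # (H, W, 2)
    cov_inv = np.linalg.inv(cov)
    mahal = np.einsum("hwi,ij,hwj->hw", d, cov_inv, d)
    alpha = opacity * np.exp(-0.5 * mahal)                   # per-pixel alpha
    rgb = alpha[..., None] * np.asarray(color)[None, None, :]
    return rgb, alpha

# Placeholder for the static 3D-GS rendering of this view (would come from
# a 3D Gaussian rasterizer in practice).
static_rgb = np.full((H, W, 3), 0.5)

# One image-specific 2D Gaussian modeling a transient occluder in this view.
transient_rgb, transient_alpha = splat_2d_gaussian(
    mean=(32, 32), cov=np.diag([40.0, 15.0]), color=(0.9, 0.2, 0.2),
    opacity=0.8, H=H, W=W)

# "Over" compositing: the per-view transient layer sits in front of the statics.
composite = transient_rgb + (1.0 - transient_alpha[..., None]) * static_rgb
print(composite.shape, composite.min(), composite.max())
```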

Paper: arxiv.org/abs/2412.04469
Project: research.nvidia.com/labs/amri/p...
• On various challenging real-world dynamic scenes, we surpass existing state-of-the-art approaches on all metrics: reconstruction quality, memory utilization, and training and rendering speed.
06.12.2024 07:43

• We introduce a learned quantization-sparsity framework for compressing per-frame residuals, initializing and training it efficiently using viewspace gradient differences that separate dynamic and static scene content.
06.12.2024 07:43

Contributions:
• We propose a Gaussian residual-based framework to model 3D dynamic scenes for online free-viewpoint video (FVV) without any structural constraints. This allows free learning of all 3D-GS attribute residuals, resulting in higher model expressiveness.
QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos
06.12.2024 07:43
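To make the residual-coding idea above more concrete, here is a rough numpy sketch (not the QUEEN implementation): per-frame attribute residuals are sparsified with a mask and then quantized before streaming. The fixed threshold and the plain 8-bit uniform quantizer are hypothetical stand-ins for the learned quantization-sparsity framework, and the mask here is derived only from residual magnitude rather than from viewspace gradient differences.

```python
# Illustrative sketch only (not the QUEEN code): compress per-frame 3D-GS
# attribute residuals by (1) masking out Gaussians whose residuals are
# negligible, keeping the update sparse, and (2) uniformly quantizing the
# residuals that survive. The paper learns both steps; here the threshold
# and the 8-bit quantizer are fixed, hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
num_gaussians = 10_000

# Per-Gaussian residuals of some attribute (e.g. position) between frames.
residuals = rng.normal(scale=0.01, size=(num_gaussians, 3))
# Pretend most of the scene is static: zero out 95% of the residuals.
residuals[rng.random(num_gaussians) < 0.95] = 0.0

# Sparsity mask: only Gaussians with a noticeable residual get an update
# (QUEEN derives this separation from viewspace gradient differences).
mask = np.linalg.norm(residuals, axis=1) > 1e-3

# Uniform 8-bit quantization of the surviving residuals.
kept = residuals[mask]
scale = np.abs(kept).max() / 127.0 if kept.size else 1.0
codes = np.clip(np.round(kept / scale), -127, 127).astype(np.int8)

# Decode on the receiver side and apply on top of the previous frame.
decoded = codes.astype(np.float32) * scale
print(f"updated {mask.sum()} / {num_gaussians} Gaussians, "
      f"max quantization error {np.abs(decoded - kept).max():.5f}")
```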

Paper: arxiv.org/abs/2412.04459
06.12.2024 07:34

Additionally, our neural-free sparse voxels are seamlessly compatible with grid-based 3D processing algorithms.
We achieve promising mesh reconstruction accuracy by integrating TSDF-Fusion and Marching Cubes into our sparse grid system.
Our method improves on the previous neural-free voxel grid representation by over 4 dB PSNR and more than a 10x rendering speedup, achieving novel-view synthesis results comparable to the state of the art.
06.12.2024 07:34
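For the mesh-reconstruction step mentioned above, the snippet below is a minimal sketch of extracting a surface from TSDF values with Marching Cubes (scikit-image's `marching_cubes`). The analytic sphere TSDF on a dense grid is a hypothetical stand-in for the TSDF values the method fuses into its sparse voxel grid.

```python
# Sketch of the mesh-extraction step: once TSDF values live on a voxel grid,
# Marching Cubes recovers the zero level set as a triangle mesh. A dense
# analytic sphere TSDF stands in here for the TSDF-fused sparse grid.
import numpy as np
from skimage.measure import marching_cubes

res, voxel_size = 64, 1.0 / 64
coords = (np.arange(res) + 0.5) * voxel_size
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")

# Truncated signed distance to a sphere of radius 0.3 centered in the cube.
sdf = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2) - 0.3
tsdf = np.clip(sdf, -0.05, 0.05)  # toy truncation band

# Extract the surface at the zero crossing of the TSDF.
verts, faces, normals, _ = marching_cubes(tsdf, level=0.0,
                                          spacing=(voxel_size,) * 3)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```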

This avoids the well-known popping artifact found in Gaussian splatting. Second, we adaptively fit sparse voxels to different levels of detail within scenes, faithfully reproducing scene details while achieving high rendering frame rates.
06.12.2024 07:34

There are two key contributions coupled with the proposed system. The first is to render sparse voxels in the correct depth order along pixel rays by using dynamic Morton ordering.
06.12.2024 07:34 β π 0 π 0 π¬ 1 π 0Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering
Abstract:
We propose an efficient radiance field rendering algorithm that incorporates a rasterization process on sparse voxels without neural networks or 3D Gaussians.
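As context for the depth-ordering contribution above, the sketch below composites sparse voxels front-to-back using an explicit depth sort along the view direction; it is illustrative only. According to the thread, the paper's dynamic Morton ordering obtains a correct per-view order without this kind of global sort, which is what avoids the popping artifact of per-Gaussian sorting.

```python
# Minimal sketch of the depth-ordering requirement: to alpha-composite
# correctly, occupied voxels must be blended front-to-back along each ray.
# Here an explicit sort along the camera view direction provides the order;
# the paper instead derives it via dynamic Morton ordering of the sparse grid.
import numpy as np

rng = np.random.default_rng(1)
voxel_centers = rng.uniform(0.0, 1.0, size=(1000, 3))   # occupied voxels
alphas = rng.uniform(0.05, 0.3, size=1000)               # per-voxel opacity
colors = rng.uniform(0.0, 1.0, size=(1000, 3))

view_dir = np.array([0.3, -0.2, 1.0])
view_dir /= np.linalg.norm(view_dir)

# Front-to-back order: increasing depth along the viewing direction.
order = np.argsort(voxel_centers @ view_dir)

# Front-to-back alpha compositing of every voxel onto one toy "pixel".
transmittance, pixel = 1.0, np.zeros(3)
for i in order:
    pixel += transmittance * alphas[i] * colors[i]
    transmittance *= 1.0 - alphas[i]
print(pixel, transmittance)
```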
Paper: arxiv.org/abs/2412.04457
Project: lynl7130.github.io/MonoDyGauBe...
Empirically, we find that their rank order is well-defined in synthetic data, but the complexity of real-world data currently overwhelms the differences. Furthermore, the fast rendering speed of all Gaussian-based methods comes at the cost of brittleness in optimization.
06.12.2024 07:29

We use multiple existing datasets and a new instructive synthetic dataset designed to isolate factors that affect reconstruction quality. We systematically categorize Gaussian splatting methods into specific motion representation types and quantify how their differences impact performance.
06.12.2024 07:29

In this work, we organize, benchmark, and analyze many Gaussian-splatting-based methods, providing apples-to-apples comparisons that prior works have lacked.
06.12.2024 07:29

Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps
Abstract (excerpt):
The fast pace of work in this area has produced multiple simultaneous papers that claim to work best, which cannot all be true.

Hot take: Pose estimation methods should generally be evaluated using a baseline training method, such as 3D Gaussian Splatting, to assess novel-view synthesis quality.
05.12.2024 13:30