We hope this can provide some insights into how to design diffusion-based NVS methods to improve their consistency and plausibility!
All code, data, & checkpoints are released!
Learn more: jason-aplp.github.io/MOVIS/ (6/6)
We also visualize the sampling process of:
• Ours (with the biased timestep scheduler)
• Zero123 (without it)
Our approach shows more precise location prediction in earlier stages & finer detail refinement in later stages! (5/6)
Key insight in MOVIS: a biased noise timestep scheduler for diffusion-based novel view synthesizers that prioritizes larger timesteps early in training and gradually decreases them over time. This improves novel view synthesis in multi-object scenes! (4/6)
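As an aside, here is a minimal sketch of what such a biased timestep scheduler could look like, assuming 1,000 training timesteps, a linear anneal from a large-timestep bias back to a uniform distribution, and an arbitrary bias strength of 4.0; none of these choices come from the released MOVIS code.

```python
import numpy as np

def sample_biased_timesteps(step, total_steps, batch_size,
                            num_train_timesteps=1000, bias_strength=4.0, rng=None):
    """Hypothetical biased timestep sampler (illustrative, not the MOVIS release).

    Early in training the sampling distribution is skewed toward large
    timesteps (heavily noised inputs, i.e. coarse placement); the skew is
    annealed away so that small timesteps (detail refinement) are covered
    more and more as training progresses.
    """
    rng = rng or np.random.default_rng()
    anneal = max(0.0, 1.0 - step / total_steps)           # 1.0 at start, 0.0 at end
    t = np.arange(num_train_timesteps)
    weights = 1.0 + anneal * bias_strength * t / (num_train_timesteps - 1)
    probs = weights / weights.sum()
    return rng.choice(num_train_timesteps, size=batch_size, p=probs)

# Early draws concentrate on large t; late draws are close to uniform.
early = sample_biased_timesteps(step=0, total_steps=100_000, batch_size=4096)
late = sample_biased_timesteps(step=100_000, total_steps=100_000, batch_size=4096)
print(early.mean(), late.mean())   # roughly ~610 vs. ~500
```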
We analyze the sampling process of diffusion-based novel view synthesizers and find:
• Larger timesteps → focus on position & orientation recovery
• Smaller timesteps → refine geometry & appearance
We visualize the sampling process below! (3/6)
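If you want to reproduce this kind of visualization, one simple recipe is to log the model's predicted clean image x0_hat at every denoising step: early snapshots mostly pin down object placement, later ones fill in geometry and texture. A minimal PyTorch sketch is below, assuming an epsilon-prediction model and a precomputed alphas_cumprod table; this is not the authors' visualization code.

```python
import torch

@torch.no_grad()
def ddim_sample_with_snapshots(model, x_T, alphas_cumprod, timesteps):
    """Run deterministic DDIM sampling and keep the predicted x0 at every step.

    model(x_t, t)   -> predicted noise epsilon (assumed parameterization)
    alphas_cumprod  -> 1-D tensor of cumulative alpha-bar values
    timesteps       -> descending list of ints, e.g. list(range(999, -1, -50))
    """
    x_t = x_T
    snapshots = []
    for i, t in enumerate(timesteps):
        a_bar = alphas_cumprod[t]
        eps = model(x_t, torch.full((x_t.shape[0],), t, device=x_t.device))
        # Epsilon parameterization: x0_hat = (x_t - sqrt(1 - a_bar) * eps) / sqrt(a_bar)
        x0_hat = (x_t - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
        snapshots.append(x0_hat.clamp(-1, 1).cpu())   # early: layout; late: fine detail
        # Deterministic DDIM update (eta = 0) toward the next timestep.
        t_prev = timesteps[i + 1] if i + 1 < len(timesteps) else 0
        a_bar_prev = alphas_cumprod[t_prev]
        x_t = a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * eps
    return snapshots
```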
In MOVIS, we enhance diffusion-based novel view synthesis with:
• Additional structural inputs (depth & mask)
• Novel-view mask prediction as an auxiliary task
• A biased noise scheduler to facilitate training
We identify the following key insight: (2/6)
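As a toy illustration of the structural conditioning and auxiliary mask prediction listed above (not the MOVIS architecture or its released code; the stand-in backbone, layer sizes, and the 0.1 auxiliary weight are assumptions, and timestep/pose conditioning is omitted for brevity): depth and mask can be channel-concatenated with the noisy latent, and a small extra head can predict the novel-view object mask as an auxiliary loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureConditionedDenoiser(nn.Module):
    """Toy structure-aware denoiser: noisy latent + depth + mask in, noise + novel-view mask out."""

    def __init__(self, latent_ch=4, depth_ch=1, mask_ch=1, hidden=64):
        super().__init__()
        in_ch = latent_ch + depth_ch + mask_ch
        self.backbone = nn.Sequential(                       # stand-in for a full UNet
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
        )
        self.eps_head = nn.Conv2d(hidden, latent_ch, 3, padding=1)   # denoising output
        self.mask_head = nn.Conv2d(hidden, 1, 3, padding=1)          # auxiliary novel-view mask

    def forward(self, noisy_latent, depth, mask):
        h = self.backbone(torch.cat([noisy_latent, depth, mask], dim=1))
        return self.eps_head(h), self.mask_head(h)

def training_loss(model, noisy_latent, depth, mask, noise, gt_novel_mask, aux_weight=0.1):
    """Denoising loss plus an auxiliary novel-view mask prediction loss."""
    eps_pred, mask_logits = model(noisy_latent, depth, mask)
    loss_eps = F.mse_loss(eps_pred, noise)
    loss_mask = F.binary_cross_entropy_with_logits(mask_logits, gt_novel_mask)
    return loss_eps + aux_weight * loss_mask
```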
How to preserve object consistency in NVS, ensuring correct position and orientation, plausible geometry, and appearance? This is especially critical for image/video generative models and world models.
Check out our #CVPR2025 paper: MOVIS (jason-aplp.github.io/MOVIS) (1/6)
This line of work highlights our research in reconstruction and scene understanding, including SSR (dali-jack.github.io/SSR/), PhyScene (physcene.github.io), PhyRecon (phyrecon.github.io), ArtGS (articulate-gs.github.io), etc., with more to come soon! (n/n)
Even more!
Our model generalizes to in-the-wild scenes like YouTube videos! Using just *15 input views*, we achieve high-quality reconstructions with detailed geometry & appearance. Watch the demo to see it in action! (5/n)
On datasets like Replica and ScanNet++, our model produces higher-quality reconstructions compared to baselines, including better accuracy in less-captured areas, more precise object structures, smoother backgrounds, and fewer floating artifacts. (4/n)
Our method excels in large, heavily occluded scenes, outperforming baselines that require 100 views while using just 10. The reconstructed scene supports interactive text-based editing, and its decomposed object meshes enable photorealistic VFX edits. (3/n)
Our method combines decompositional neural reconstruction with a diffusion prior, filling in missing information in less-observed and occluded regions. The reconstruction (rendering loss) and generative (SDS loss) guidance are balanced by our visibility-guided modeling. (2/n)
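As a rough sketch of what visibility-guided balancing can look like in practice (illustrative only; the per-pixel weighting rule and lambda_sds below are assumptions, not DPRecon's exact formulation): well-observed pixels follow the photometric rendering loss, while poorly observed or occluded pixels lean on the score-distillation (SDS) guidance.

```python
import torch

def visibility_balanced_loss(rendered_rgb, gt_rgb, sds_grad, visibility, lambda_sds=1.0):
    """Per-pixel blend of reconstruction and generative guidance (illustrative sketch).

    rendered_rgb, gt_rgb : (..., 3) rendered and ground-truth colors
    sds_grad             : (..., 3) gradient produced by a score-distillation step
    visibility           : (...,)   per-pixel observation score in [0, 1]
    """
    w = visibility.unsqueeze(-1)                              # broadcast over RGB
    # Well-observed pixels: trust the rendering (photometric) loss.
    loss_render = (w * (rendered_rgb - gt_rgb) ** 2).mean()
    # Occluded / barely observed pixels: follow the diffusion prior. Standard
    # SDS trick: detach the gradient and dot it with the render so that
    # backprop injects `sds_grad` into those pixels.
    loss_sds = ((1.0 - w) * sds_grad.detach() * rendered_rgb).mean()
    return loss_render + lambda_sds * loss_sds
```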
How to reconstruct 3D scenes with decomposed objects from sparse inputs?
Check out DPRecon (dp-recon.github.io) at #CVPR2025: it recovers all objects, achieves photorealistic mesh rendering, and supports text-based geometry & appearance editing. More details in the thread! (1/n)
Excited to announce the 5th Workshop on 3D Scene Understanding for Vision, Graphics, and Robotics at #CVPR2025! Expect awesome speakers and challenges on multi-modal 3D scene understanding and reasoning.
Learn more at scene-understanding.com.
Checking the digest from scholar-inbox has become my daily routine. A real game-changer!