
Chih-Hao Lin

@chih-hao.bsky.social

https://chih-hao-lin.github.io/

53 Followers  |  43 Following  |  22 Posts  |  Joined: 18.01.2025

Latest posts by chih-hao.bsky.social on Bluesky

We’re presenting WeatherWeaver at #ICCV2025, Poster Session 3 (Oct 22, Wed, 10:45–12:45)!
Come visit #337 and see how we make it snow in Hawaii πŸοΈβ„οΈβ›„

22.10.2025 10:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Finally, meet your #3DV2026 Publicity Chairs! πŸ“’
@hanwenjiang1 @yanxg.bsky.social @chih-hao.bsky.social @csprofkgd.bsky.social

We’ll keep the 3DV conversation alive: posting updates, refreshing the website, and listening to your feedback.

Got questions or ideas? Tag @3dvconf.bsky.social anytime!

31.07.2025 16:54 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Introducing your #3DV2026 πŸ“Publication Chairs &πŸ”Research Interaction Chairs!

πŸ“Publication Chairs ensure accepted papers are properly published in the conference proceedings

πŸ”Research Interaction Chairs encourage engagement by spotlighting exceptional research in 3D vision

24.07.2025 02:36 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Understanding and reconstructing the 3D world are at the heart of computer vision and graphics. At #CVPR2025, we’ve seen many exciting works in 3D vision.
If you're pushing the boundaries, please consider submitting your work to #3DV2026 in Vancouver! (Deadline: Aug. 18, 2025)

01.07.2025 02:07 β€” πŸ‘ 17    πŸ” 7    πŸ’¬ 1    πŸ“Œ 0

This project was part of my internship at Meta, and it was a great collaboration with Jia-Bin, Zhengqin, Zhao, Christian, Tuotuo, Michael, Johannes, Shenlong, and Changil πŸ™Œ

10.06.2025 02:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ“ Meet us at #CVPR2025
πŸ—“οΈ June 13 (Fri.), 10:30–12:30
πŸͺ§ Come by our poster session to chat about IRIS and any research ideas
Looking forward to reconnecting β€” and meeting new friends β€” in Nashville! 🎸✨

10.06.2025 02:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Please check out our paper, code, and more demo videos!
🌐 Project page: irisldr.github.io
πŸ’» GitHub: github.com/facebookrese...
πŸ“ Paper: arxiv.org/abs/2401.12977

10.06.2025 02:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Excited to share our work at #CVPR2025!
πŸ‘οΈIRIS estimates accurate surface material, spatially-varying HDR lighting, and camera response function given a set of LDR images! It enables realistic, view-consistent, and controllable relighting and object insertion.
(links in 🧡)

10.06.2025 02:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
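For readers unfamiliar with one of the quantities mentioned above: a camera response function (CRF) maps scene radiance (HDR) to recorded pixel values (LDR). The sketch below is purely illustrative, using a simple gamma curve as a stand-in CRF; it is not IRIS's actual estimated model.

```python
import numpy as np

# Toy camera response function (CRF): clip radiance to the sensor's
# representable range, then gamma-encode. Real CRFs are calibrated or,
# as in IRIS, estimated jointly with materials and lighting.
def crf_gamma(radiance, gamma=2.2):
    """Map HDR radiance to LDR pixel values in [0, 1]."""
    return np.clip(radiance, 0.0, 1.0) ** (1.0 / gamma)

hdr = np.array([0.0, 0.25, 1.0, 4.0])  # HDR radiance (can exceed 1)
ldr = crf_gamma(hdr)                    # LDR pixel values in [0, 1]
```

Note how all radiance above 1.0 saturates to the same pixel value, which is why recovering HDR lighting from LDR inputs is ill-posed.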

I’m thrilled to share that I will be joining Johns Hopkins University’s Department of Computer Science (@jhucompsci.bsky.social, @hopkinsdsai.bsky.social) as an Assistant Professor this fall.

02.06.2025 19:46 β€” πŸ‘ 8    πŸ” 2    πŸ’¬ 1    πŸ“Œ 2

πŸ“’ 3DV 2026 – Call for Papers is Out!

πŸ“ Paper Deadline: Aug 18
πŸŽ₯ Supplementary: Aug 21
πŸ”— 3dvconf.github.io/2026/call-fo...

πŸ“… Conference Date: Mar 20–23, 2026
πŸŒ† Location: Vancouver πŸ‡¨πŸ‡¦

πŸš€ Showcase your latest research to the world!
#3DV2026 #CallForPapers #Vancouver #Canada

29.05.2025 17:11 β€” πŸ‘ 9    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0

πŸ”Š New NVIDIA paper: Audio-SDS πŸ”Š
We repurpose Score Distillation Sampling (SDS) for audio, turning any pretrained audio diffusion model into a tool for diverse tasks, including source separation, impact synthesis & more.

🎧 Demos, audio examples, paper: research.nvidia.com/labs/toronto...

🧡below

09.05.2025 16:06 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
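As background, the SDS gradient the post refers to follows the DreamFusion-style form grad = w(t) Β· (eps_hat(x_t; y, t) βˆ’ eps). The sketch below shows one such update on a waveform, with a fake stand-in noise predictor; the real Audio-SDS system, its model, and its API are not shown in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x_t, t, cond):
    # Placeholder noise predictor: a real pretrained audio diffusion model
    # would estimate the added noise conditioned on the prompt `cond`.
    return 0.1 * x_t

def sds_step(theta, t, cond, lr=1e-2):
    """One SDS update on the optimized waveform `theta` (toy schedule, w(t)=1)."""
    eps = rng.standard_normal(theta.shape)          # fresh Gaussian noise
    alpha_bar = 1.0 - t                              # toy noise schedule
    x_t = np.sqrt(alpha_bar) * theta + np.sqrt(1.0 - alpha_bar) * eps
    grad = denoise(x_t, t, cond) - eps               # SDS gradient
    return theta - lr * grad

theta = np.zeros(16)                                 # signal being optimized
theta = sds_step(theta, t=0.5, cond="impact sound")
```

The key point of SDS is that the pretrained denoiser is never fine-tuned; its noise-prediction error is used directly as a gradient on the signal.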
WeatherWeaver: Controllable Weather Simulation and Removal with Video Diffusion Models

This work is a great collaboration at NVIDIA AI by Chih-Hao Lin, Zian Wang, Ruofan Liang, Yuxuan Zhang, Sanja Fidler, Shenlong Wang, Zan Gojcic

🌐Please check out our project page: research.nvidia.com/labs/toronto...

02.05.2025 14:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The weather removal model successfully removes both transient effects (e.g., rain, snowflakes) and persistent effects (e.g., puddles, snow cover), and can even restore sunny-day lighting from rainy or snowy videos.

02.05.2025 14:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

WeatherWeaver combines two video diffusion models. The weather synthesis model generates realistic, temporally consistent weather, adapting shading naturally while preserving the original scene structure.

02.05.2025 14:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

By combining and adjusting multiple weather effects, WeatherWeaver can simulate complex weather transitions, e.g. on 🌧️ rainy and β˜ƒοΈ snowy days β€” without costly real-world acquisitions.

02.05.2025 14:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

WeatherWeaver enables precise control of the weather effects by changing the intensity of the corresponding effects. 🌀️➑️πŸŒ₯️

02.05.2025 14:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We train a video diffusion model to edit weather effects with precise control, using a novel data strategy combining synthetic videos, generative image editing, and auto-labeled real-world videos.

02.05.2025 14:19 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
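The data strategy above mixes three training sources. A minimal sketch of what such mixed-source batch sampling could look like, assuming hypothetical source names and mixing weights (none of these are stated in the thread):

```python
import random

# Hypothetical mixing weights over the three data sources the post names.
SOURCES = {
    "synthetic_video": 0.5,    # exactly paired weather edits from a renderer
    "generative_edit": 0.3,    # image pairs from a generative image editor
    "auto_labeled_real": 0.2,  # real-world videos with automatic labels
}

def sample_source(rng):
    """Pick one data source for the next training example, by weight."""
    names, weights = zip(*SOURCES.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
batch_sources = [sample_source(rng) for _ in range(8)]
```

Mixing synthetic (perfectly paired but unrealistic) with real (realistic but imperfectly labeled) data is a common way to trade off supervision quality against domain gap.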

Realistic, controllable weather simulation opens new possibilities in πŸš— autonomous driving simulation and 🎬 filmmaking. Physics-based simulation requires accurate geometry and doesn’t scale to in-the-wild videos, while existing video editing often lacks realism and control.

02.05.2025 14:19 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What if you could control the weather in any video β€” just like applying a filter?
Meet WeatherWeaver, a video model for controllable synthesis and removal of diverse weather effects β€” such as 🌧️ rain, β˜ƒοΈ snow, 🌁 fog, and ☁️ clouds β€” for any input video.

02.05.2025 14:19 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 1

[1/10] Is scene understanding solved?

Models today can label pixels and detect objects with high accuracy. But does that mean they truly understand scenes?

Super excited to share our new paper and a new task in computer vision: Visual Jenga!

πŸ“„ arxiv.org/abs/2503.21770
πŸ”— visualjenga.github.io

29.03.2025 19:36 β€” πŸ‘ 58    πŸ” 14    πŸ’¬ 7    πŸ“Œ 1

Check out our cool demos. The code is also open-source!
Project website: haoyuhsu.github.io/autovfx-webs...
Code (GitHub): github.com/haoyuhsu/aut...
Paper: arxiv.org/abs/2411.02394

22.03.2025 03:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

🎬Imagine creating professional visual effects (VFX) with just words! We are excited to introduce AutoVFX, a framework that creates realistic video effects from natural language instructions!

This is a cool project led by Hao-Yu, and we will present it at #3DV 2025!

22.03.2025 03:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Can we create realistic renderings of urban scenes from a single video while enabling controllable editing: relighting, object compositing, and nighttime simulation?

Check out our #3DV2025 UrbanIR paper, led by @chih-hao.bsky.social, which does exactly this.

πŸ”—: urbaninverserendering.github.io

16.03.2025 03:39 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Check out UrbanIR - Inverse rendering of unbounded scenes from a single video!

It’s a super cool project led by the amazing Chih-Hao!

@chih-hao.bsky.social is a rising star in 3DV! Follow him!

Learn more hereπŸ‘‡

15.03.2025 13:49 β€” πŸ‘ 10    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

πŸ™ Huge thanks to our amazing collaborators from UIUC & UMD: Bohan, Yi-Ting, Kuan-Sheng, David, Jia-Bin (@jbhuang0604.bsky.social) , Anand (@anandbhattad.bsky.social) , and Shenlong. This work wouldn't be possible without you all.

15.03.2025 06:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ“’ Meet us at 3DV 2025 in Singapore!
We’re excited to present UrbanIR at 3DV 2025 @3dvconf.bsky.social. Come chat with us to discuss future directions!

15.03.2025 06:30 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Check out our interactive demos, and the code is open-source!
Project website: urbaninverserendering.github.io
Code (GitHub): github.com/chih-hao-lin...
Paper: arxiv.org/abs/2306.09349

15.03.2025 06:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

UrbanIR precisely controls the lighting, simulating different times of day without time-consuming tripod captures.

15.03.2025 06:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

With estimated scene properties, UrbanIR integrates a physically-based shading model into a neural field, rendering realistic videos from novel viewpoints and lighting conditions.

15.03.2025 06:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ” How does it work?
UrbanIR reconstructs scene propertiesβ€”geometry, albedo, and shadingβ€”through inverse rendering. Since this is a highly ill-posed problem, we leverage 2D priors to guide the optimization.

15.03.2025 06:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
