
Sergio Izquierdo

@sizquierdo.bsky.social

PhD candidate at University of Zaragoza. Previously intern at Niantic Labs and Skydio. Working on 3D reconstruction and Deep Learning. serizba.github.io

52 Followers  |  66 Following  |  11 Posts  |  Joined: 06.12.2024

Latest posts by sizquierdo.bsky.social on Bluesky

Civil Software Licenses

One concern I have as an AI researcher when publishing code is that it could be repurposed for harmful dual-use applications.
To address this, we propose Civil Software Licenses. They prevent dual use while imposing minimal restrictions:

civil-software-licenses.github.io

31.07.2025 17:36 — 👍 16    🔁 3    💬 3    📌 0

Presenting today at #CVPR poster 81.

Code is available at github.com/nianticlabs/...

Want to try it on an iPhone video? On Android? On any other sequence you have? We've got you covered. Check the repo.

14.06.2025 14:25 — 👍 4    🔁 0    💬 0    📌 0

Presenting it now at #CVPR

14.06.2025 14:24 — 👍 4    🔁 0    💬 0    📌 0

Happy to be one of them

15.05.2025 10:45 — 👍 2    🔁 0    💬 0    📌 0

We focused on depth from videos, and as you pointed out, we didn't train on datasets with different captures per scene.

31.03.2025 15:51 — 👍 0    🔁 0    💬 1    📌 0
MVSAnywhere: Zero-Shot Multi-View Stereo, CVPR 2025

Check the website: nianticlabs.github.io/mvsanywhere/
And the paper: arxiv.org/pdf/2503.22430
Code coming soon!

Great work with @mohamedsayed.bsky.social @mdfirman.bsky.social @guiggh.bsky.social D. Turmukhambetov @jcivera.bsky.social @oisinmacaodha.bsky.social @gbrostow.bsky.social J. Watson

31.03.2025 12:52 — 👍 3    🔁 0    💬 0    📌 0

💡Use case:

We show how the accurate and robust depths from MVSAnywhere serve to regularize Gaussian splats, yielding much cleaner scene reconstructions.

As MVSAnywhere is agnostic to the scene scale, this is plug-and-play for your splats!
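Since the predicted depths share the scale of the input cameras, the regularization can be as simple as penalizing the gap between the splat-rendered depth and the MVSAnywhere depth map. A minimal sketch of such a loss term (a hypothetical illustration, not the repo's actual implementation; the masking heuristic is an assumption):

```python
import numpy as np

def depth_regularization_loss(rendered_depth, reference_depth, mask=None):
    """Mean L1 gap between the depth rendered from the Gaussian splats
    and a reference depth map (e.g. from MVSAnywhere).

    Because the reference depth is predicted at the same scale as the
    input cameras, no per-image scale/shift alignment is needed and the
    two maps can be compared directly.
    """
    rendered = np.asarray(rendered_depth, dtype=np.float64)
    reference = np.asarray(reference_depth, dtype=np.float64)
    if mask is None:
        mask = reference > 0  # assumed convention: zero marks invalid pixels
    diff = np.abs(rendered - reference)[mask]
    return diff.mean() if diff.size else 0.0
```

In a training loop this would be added to the usual photometric loss with a small weight, e.g. `loss = photometric + 0.1 * depth_regularization_loss(...)`.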

31.03.2025 12:52 — 👍 3    🔁 0    💬 1    📌 0
Quantitative results of MVSAnywhere

🏆Results:

MVSAnywhere achieves state-of-the-art results on the Robust Multi-View Depth Benchmark, showing its strong generalization performance.

31.03.2025 12:52 — 👍 4    🔁 0    💬 1    📌 0

🧩Challenge: Varying Depth Scales & Unknown Ranges

🔹Most models require a known depth range to build the cost volume.
✅MVSAnywhere estimates an initial range from the camera scale and setup, then refines it. It predicts at the same scale as the input cameras!
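The idea of bootstrapping a depth range from the camera setup can be sketched as follows. This is a toy heuristic using the mean camera baseline as a scale proxy, with log-spaced depth planes for the cost volume; the factors and the sampling scheme are assumptions, not the paper's actual procedure:

```python
import numpy as np

def initial_depth_range(cam_centers, near_factor=0.5, far_factor=20.0):
    """Guess a depth search range from the camera setup alone.

    Uses the mean pairwise distance between camera centers as a proxy
    for the scene scale (hypothetical heuristic), so the range is
    expressed in the same units as the input cameras.
    """
    c = np.asarray(cam_centers, dtype=np.float64)
    dists = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    baseline = dists[np.triu_indices(len(c), k=1)].mean()
    return near_factor * baseline, far_factor * baseline

def depth_hypotheses(near, far, num_planes=64):
    """Log-spaced depth planes spanning [near, far] for the cost volume."""
    return np.geomspace(near, far, num_planes)
```

A refinement step would then shrink `[near, far]` around a coarse prediction and re-sample the planes more densely.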

31.03.2025 12:52 — 👍 2    🔁 0    💬 1    📌 0
Qualitative results of MVSAnywhere

🧩Challenge: Domain Generalization

🔹Previous models struggle across different domains (indoor🏠 vs outdoor🏞️).
✅MVSAnywhere uses a transformer architecture and is trained on a large array of varied synthetic datasets.

31.03.2025 12:52 — 👍 3    🔁 0    💬 1    📌 0
MVSAnywhere works with dynamic objects and casually captured videos.

🧩Challenge: Robustness to casually captured videos

🔹MVS methods rely entirely on the matches in the cost volume (which fails under low overlap and dynamic objects)
✅MVSAnywhere successfully combines strong single-view image priors with multi-view information from our cost volume
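One way to picture combining the two sources is a per-pixel blend gated by matching confidence: where the cost volume found reliable matches, trust the multi-view depth; elsewhere (low overlap, dynamic objects), lean on the monocular prior. This is a toy stand-in for the learned fusion inside the network, with all names hypothetical:

```python
import numpy as np

def fuse_depths(mvs_depth, mono_depth, matching_conf):
    """Blend multi-view and single-view depth by matching confidence.

    matching_conf in [0, 1]: 1 means the cost-volume match is reliable
    (use MVS depth), 0 means it is not (fall back to the monocular
    prior). Intermediate values interpolate linearly.
    """
    w = np.clip(np.asarray(matching_conf, dtype=np.float64), 0.0, 1.0)
    return w * np.asarray(mvs_depth) + (1.0 - w) * np.asarray(mono_depth)
```

In the actual model this gating is learned end-to-end rather than hand-set, but the failure mode it addresses is the same: pure cost-volume matching breaks exactly where a single-view prior still works.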

31.03.2025 12:52 — 👍 3    🔁 0    💬 1    📌 0

🔍Looking for a multi-view depth method that just works?

We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes and is robust to all kinds of scenes, and is scale-agnostic.

More info:
nianticlabs.github.io/mvsanywhere/

31.03.2025 12:52 — 👍 40    🔁 10    💬 2    📌 4

MASt3R-SLAM code release!
github.com/rmurai0610/M...

Try it out on videos or with a live camera

Work with
@ericdexheimer.bsky.social*,
@ajdavison.bsky.social (*Equal Contribution)

25.02.2025 17:23 — 👍 51    🔁 10    💬 2    📌 3

MegaLoc: One Retrieval to Place Them All
@berton-gabri.bsky.social Carlo Masone

tl;dr: DINOv2-SALAD, trained on all available VPR datasets, works very well.
Code should be at github.com/gmberton/Meg..., but it's not up yet
arxiv.org/abs/2502.17237
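At inference, this kind of visual place recognition boils down to nearest-neighbor search over global image descriptors. A minimal sketch, assuming L2-normalizable descriptor vectors (e.g. the DINOv2-SALAD-style embeddings MegaLoc builds on; function and parameter names are illustrative):

```python
import numpy as np

def retrieve(query_desc, db_descs, top_k=5):
    """Return indices of the top_k database images most similar to the
    query, by cosine similarity of L2-normalized global descriptors."""
    q = np.asarray(query_desc, dtype=np.float64)
    db = np.asarray(db_descs, dtype=np.float64)
    q = q / np.linalg.norm(q)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = db @ q  # cosine similarity against every database descriptor
    return np.argsort(-sims)[:top_k]
```

For large databases one would swap the brute-force `db @ q` for an approximate nearest-neighbor index, but the retrieval logic is unchanged.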

25.02.2025 10:03 — 👍 13    🔁 3    💬 1    📌 0
