One thing about humanoid robots that I think is underappreciated: you don't have to build some elaborate specialized training set for an arm on wheels. They can learn from humans performing actions, which is incredibly useful.
01.08.2025 15:12 · 39 · 5 · 3 · 0
Video recordings from our workshop on Embodied Intelligence and tutorial on Robotics 101 @cvprconference.bsky.social are now up, just in time to catch up with things over the summer.
Enjoy! #CVPR2025
16.07.2025 13:24 · 8 · 3 · 0 · 0
VGGT for the masses! #cvpr2025
14.06.2025 06:25 · 26 · 3 · 0 · 0
SGP 2025 - Submit page
The Symposium on Geometry Processing is an amazing venue for geometry research: meshes, point clouds, neural fields, 3D ML, etc. Reviews are quick and high-quality.
The deadline is in ~10 days. Consider submitting your work; I'm planning to submit!
sgp2025.my.canva.site/submit-page-...
01.04.2025 18:42 · 43 · 10 · 0 · 0
📢 We present CWGrasp, a framework for generating 3D Whole-body Grasps with Directional Controllability.
Specifically:
- given a grasping object (shown in red) placed on a receptacle (brown),
- we aim to generate a body (gray) that grasps the object.
🧵 1/10
14.03.2025 18:35 · 10 · 1 · 1 · 1
📢📢📢 Submit to our workshop on Physics-inspired 3D Vision and Imaging at #CVPR2025!
Speakers 🗣️ include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @shwbaek.bsky.social and Gordon Wetzstein!
pi3dvi.github.io
You can also just come hang out with us at the workshop @cvprconference.bsky.social!
13.03.2025 18:47 · 10 · 3 · 0 · 0
I will not lie: having the supplementary material deadline on the same day as the main paper deadline (as ICLR and NeurIPS always did, of course) does not have the best impact on the stress component of the paper submission crunch.
07.03.2025 21:34 · 9 · 2 · 1 · 0
"Flow" wins best animated feature film Oscar
LOS ANGELES, March 2 (Reuters) - The independent film "Flow" won the best animated feature film Oscar on Sunday, securing the first Academy Award for Latvia and its Latvian director Gints Zilbalodis.
A huge congrats to Flow for winning the Oscar for Best Animated Feature! It was made by a tiny crew entirely using Blender and rendered entirely using Eevee. IMO everyone in the wider animation industry has lessons to learn from Flow.
www.reuters.com/lifestyle/fl...
03.03.2025 04:46 · 38 · 3 · 0 · 1
Paper: arxiv.org/pdf/2311.16042
Code: github.com/janehwu/clot...
28.02.2025 20:40 · 2 · 0 · 0 · 0
This project started as a cold email back in 2020, and from it came a wonderful new collaboration and immense personal growth. It's not every day that my research requires writing CUDA kernels...
Thank you to Diego Thomas (who will also be at WACV) and Ron Fedkiw for guiding me every step of the way!
28.02.2025 20:40 · 2 · 0 · 1 · 0
Our method reconstructs a unified human mesh from in-the-wild images, recovering high-frequency details like cloth wrinkles even in the absence of any ground-truth 3D data.
28.02.2025 20:40 · 3 · 0 · 1 · 0
In this paper, we introduce a low-cost, optimization-based method for 3D human reconstruction guided by inferred 2D normal maps.
Aiming for end-to-end differentiability, we derive analytical gradients to backpropagate from predicted normal maps to network-inferred SDF values on a tetrahedral mesh.
28.02.2025 20:40 · 3 · 0 · 1 · 0
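To make the gradient path concrete, here is a minimal sketch of supervising an SDF network with a 2D normal-map loss. It substitutes autograd on a continuous SDF MLP for the paper's analytical derivation on a tetrahedral mesh, and `sdf_net`, `surface_pts`, and `target_normals` are hypothetical stand-ins, not names from the paper's code.

```python
import torch
import torch.nn.functional as F

# Hypothetical SDF network: maps a 3D point to a signed distance value.
sdf_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 1),
)

def surface_normals(points):
    """Surface normal = normalized spatial gradient of the SDF."""
    points = points.requires_grad_(True)
    sdf = sdf_net(points)
    (grad,) = torch.autograd.grad(sdf.sum(), points, create_graph=True)
    return F.normalize(grad, dim=-1)

# Stand-ins for supervision: 3D points on the current surface estimate and the
# normals a 2D network inferred for the corresponding pixels.
surface_pts = torch.rand(1024, 3)
target_normals = F.normalize(torch.randn(1024, 3), dim=-1)

# Normal-map loss; backward() carries gradients through the normals into sdf_net.
loss = (surface_normals(surface_pts) - target_normals).abs().mean()
loss.backward()
```

The paper works this chain rule out analytically for network-inferred SDF values on the tetrahedral mesh, which is how it achieves end-to-end differentiability.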
It all started with a question that can be best characterized as "born out of resource scarcity": can we reconstruct humans from consumer-grade cameras without using *any* 3D training data?
(Half a PhD later) Yes, we can! 😮‍💨
28.02.2025 20:40 · 2 · 0 · 1 · 0
I'll be presenting "Sparse-View 3D Reconstruction of Clothed Humans via Normal Maps" tomorrow morning at #WACV2025 Oral Session 1.1. Excited to share the final project of my PhD! A brief story 🧵
28.02.2025 20:40 · 8 · 0 · 2 · 0
What happens when vision 🤝 robotics meet? Happy to share our new work on Pretraining Robotic Foundational Models! 🔥
ARM4R is an Autoregressive Robotic Model that leverages low-level 4D Representations learned from human video data to yield a better robotic model.
BerkeleyAI
24.02.2025 03:49 · 16 · 5 · 1 · 0
Full quality video here: www.youtube.com/watch?v=uVcB...
21.02.2025 20:06 · 3 · 1 · 1 · 0
GPUDrive got accepted to ICLR 2025!
With that, we release GPUDrive v0.4.0! You can now install the repo and run your first fast PPO experiment in under 10 minutes.
I'm honestly so excited about the new opportunities and research the sim makes possible. 1/2
20.02.2025 18:53 · 45 · 4 · 2 · 1
Just found a new winner for the most hype-baiting, unscientific plot I have seen. (From the recent Figure AI release)
20.02.2025 22:01 · 37 · 6 · 1 · 1
Really excited to put together this #CVPR2025 workshop on "4D Vision: Modeling the Dynamic World" -- one of the most fascinating areas in computer vision today!
We've invited incredible researchers who are leading fantastic work in various related fields.
4dvisionworkshop.github.io
12.02.2025 10:34 · 23 · 3 · 1 · 3
MULA 2025
Eighth Multimodal Learning and Applications Workshop
Paper submission is now open for the 8th Multimodal Learning and Applications Workshop at #CVPR2025!
Call For Papers: mula-workshop.github.io
#computervision #cvpr #multimodal #ai
11.02.2025 22:06 · 6 · 1 · 0 · 0
EgoVis 2023/2024 Distinguished Paper Awards
EgoVis
Call for Nominations: EgoVis 2023/2024 Distinguished Paper Awards
Did you publish a paper contributing to Ego Vision in 2023 or 2024?
Innovative & advancing Ego Vision?
Worthy of a prize?
Deadline: 1 April 2025
Decisions @cvprconference.bsky.social #CVPR2025
egovis.github.io/awards/2023_...
11.02.2025 11:29 · 9 · 4 · 0 · 2
(1/n)
📢📢 NeRSemble v2 Dataset Release 📢📢
Head captures of 7.1MP from 16 cameras at 73fps:
* More recordings (425 people)
* Better color calibration
* Convenient download scripts
github.com/tobias-kirsc...
11.02.2025 15:05 · 15 · 8 · 1 · 0
Announcing Diffusion Forcing Transformer (DFoT), our new video diffusion algorithm that generates ultra-long videos of 800+ frames. DFoT enables History Guidance, a simple add-on to any existing video diffusion model for a quality boost. Website: boyuan.space/history-guidance (1/7)
11.02.2025 20:37 · 35 · 6 · 1 · 0
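For intuition only, a rough sketch of what a history-guidance-style combination could look like, in the spirit of classifier-free guidance. The `denoiser` signature, the history masking, and the guidance weight are assumptions for illustration, not the DFoT API; see boyuan.space/history-guidance for the actual method.

```python
import torch

def history_guided_eps(denoiser, x_t, t, history, weight=1.5):
    # Assumed CFG-style combination: push the noise prediction toward the one
    # conditioned on past frames and away from the history-free prediction.
    eps_with_hist = denoiser(x_t, t, history)  # conditioned on history frames
    eps_no_hist = denoiser(x_t, t, None)       # history masked out
    return eps_no_hist + weight * (eps_with_hist - eps_no_hist)

# Dummy denoiser just to show the call pattern; tensors are (frames, C, H, W).
dummy_denoiser = lambda x_t, t, history: torch.zeros_like(x_t)
eps = history_guided_eps(dummy_denoiser, torch.randn(8, 3, 32, 32), t=10, history=None)
```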
We can usually only get partial observations of scenes, but getting complete object information could be helpful for many tasks in robotics and graphics. Our new ICLR 2025 paper extends point-based single-object completion models to completing multiple objects in a scene. (1/3) 🧵
11.02.2025 06:59 · 9 · 2 · 1 · 0
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
hd-epic.github.io
arxiv.org/abs/2502.04144
Newly collected videos
263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks.
26K VQA benchmark to challenge current VLMs
1/N
07.02.2025 11:45 · 33 · 6 · 2 · 4
DexGen
Seeing some of the early results from DexterityGen was definitely a wow moment for me!
It doesn't take a lot to realize all the new opportunities a strong teleop system like this enables!
X thread: x.com/zhaohengyin/...
Link: zhaohengyin.github.io/dexteritygen/
08.02.2025 03:02 · 2 · 1 · 0 · 0
Our new work makes a big leap, moving from depth-based end-to-end to end-to-end on raw RGB pixels. We have two versions, mono and stereo, both trained entirely in simulation (IsaacLab).
10.02.2025 04:59 · 21 · 2 · 1 · 1
ML Eng. and econometrics. A lot more left-posting than normal. Some hobby-level finance.
Regrettably degen trading for the next 3 months, I'm sorry.
Views don't reflect my employer.
CEO of Bluesky, steward of AT Protocol.
dec/acc 🌱 🪴 🌳
Research Director at CNR, head of Visual Computing Lab
Messing with small triangles for ages.
Guilty of providing the community with many buggy, but hopefully useful, tools
Official account for OpenDriveLab at HKU and Beyond. We do cutting-edge research in Robotics, Autonomous Driving.
Webpage: opendrivelab.com
Email: contact@opendrivelab.com
Marrying classical CV and Deep Learning. I do things which work, rather than things which are novel but don't work.
http://dmytro.ai
PhD at Tübingen. Working on post-training diffusion and multimodal models. Previously a research intern at Snapchat and Naver Labs.
https://sgk98.github.io/
PhD - R&D Engineer at French Mapping Agency @ignfrance.bsky.social working on Deep Learning and Computer Vision for Earth observation
Making robots part of our everyday lives. #AI research for #robotics. #computervision #machinelearning #deeplearning #NLProc #HRI Based in Grenoble, France. NAVER LABS R&D
europe.naverlabs.com
PhD student in Machine Learning @Warsaw University of Technology and @IDEAS NCBR
2nd Year PhD Student from Imagine-ENPC/IGN/CNES
Working on Self-supervised Cross-modal Geospatial Learning.
Personal WebPage: https://gastruc.github.io/
PhD student at IMAGINE (ENPC) and GeoVic (Ecole Polytechnique). Working on image generation.
http://nicolas-dufour.github.io
Perception Researcher at Ericsson, Sweden.
https://marcusklasson.github.io/
PhD student at UT Austin working on program synthesis. Visiting student at Caltech.
Reader in Computer Vision and Machine Learning @ School of Informatics, University of Edinburgh.
https://homepages.inf.ed.ac.uk/omacaod
PhD Student at IMAGINE (ENPC)
Working on camera pose estimation
thibautloiseau.github.io
Research Scientist at Yahoo! / ML OSS developer
PhD in Computer Science at UC Irvine
Research: ML, NLP, Computer Vision, Information Retrieval
Technical Chair: #CVPR2026 #ICCV2025 #WACV2026
Open Source/Science matters!
https://yoshitomo-matsubara.net
trying to build reliable models from unreliable data
postdoc @ Cornell Tech, phd @ MIT
dmshanmugam.github.io