@dimtzionas.bsky.social
Assistant Professor of 3D Computer Vision at the University of Amsterdam. 3D Human-centric Perception & Synthesis: bodies, hands, objects. Past: MPI for Intelligent Systems, Univ. of Bonn, Aristotle Univ. of Thessaloniki. Website: https://dtzionas.com
Your #ICCV2025 paper got rejected? Give it another try and submit to our proceedings track!
Your #ICCV2025 paper got accepted? Congrats! Give it even more visibility by joining our nectar track.
More info: sites.google.com/view/neuslam...
Why does 3D human-object reconstruction fail in the wild, or remain limited to a few object classes? A key missing piece is accurate 3D contact. InteractVLM (#CVPR2025) uses foundational models to infer contact on humans & objects, improving reconstruction from a single image. (1/10)
15.06.2025 12:23
InteractVLM: 3D Interaction Reasoning from 2D Foundational Models
Sai Kumar Dwivedi, Dimitrije Antić, Shashank Tripathi ... Dimitrios Tzionas
arxiv.org/abs/2504.05303
Trending on www.scholar-inbox.com
If you are at #3DV2025 @3dvconf.bsky.social today, George and Omid will be presenting our work on '3D Whole-Body Grasps with Directional Controllability'. Stop by to discuss human avatars and interactions!
• Poster 6-15
• Local time 15:30 - 17:00
• Website: gpaschalidis.github.io/cwgrasp/
CWGrasp will be presented @3dvconf.bsky.social #3DV2025
Authors: G. Paschalidis, R. Wilschut, D. Antić, O. Taheri, D. Tzionas
Collaboration: University of Amsterdam, MPI for Intelligent Systems
Project: gpaschalidis.github.io/cwgrasp
Paper: arxiv.org/abs/2408.16770
Code: github.com/gpaschalidis...
🧵 10/10
🧩 Our code is modular: each model has its own repo.
You can easily integrate these into your code & build new research!
🧩 CGrasp: github.com/gpaschalidi...
🧩 CReach: github.com/gpaschalidi...
🧩 ReachingField: github.com/gpaschalidi...
🧩 CWGrasp: github.com/gpaschalidi...
🧵 9/10
⚙️ CWGrasp:
• requires 500x fewer samples & runs 10x faster than the SotA,
• produces grasps that are perceived as more realistic than the SotA ~70% of the time,
• works well for objects placed at various heights above the floor,
• generates both right- & left-hand grasps.
🧵 8/10
• We condition both CGrasp & CReach on the same direction.
• This produces a hand-only guiding grasp & a reaching body that are already mutually compatible!
🎯 Thus, only a *small* refinement is needed, and *only* for the body, so that its fingers match the guiding hand (see the sketch below)!
🧵 7/10
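As a rough illustration of that refinement step (not the actual CWGrasp code), here is a minimal PyTorch sketch: a made-up toy body model stands in for the real one, the guiding hand stays frozen, and only the body pose is optimised so its finger joints approach the guiding hand's joints. All names are hypothetical placeholders.

```python
# Hypothetical sketch of the body-only refinement: the guiding hand is frozen
# and only the body pose is optimised to match it. Toy stand-ins throughout.
import torch

def toy_body_model(pose: torch.Tensor) -> torch.Tensor:
    """Made-up differentiable stand-in: maps a pose vector to 15 'finger joints'."""
    return torch.tanh(pose).reshape(15, 3)

# Frozen target: finger joints of the guiding hand (as produced by CGrasp).
guiding_hand_joints = torch.randn(15, 3)

# Start from the reaching body (as sampled from CReach); refine only its pose.
body_pose = torch.randn(45, requires_grad=True)
optimizer = torch.optim.Adam([body_pose], lr=0.01)

for step in range(200):
    optimizer.zero_grad()
    body_finger_joints = toy_body_model(body_pose)
    # Pull the body's fingers onto the (fixed) guiding hand.
    loss = torch.nn.functional.mse_loss(body_finger_joints, guiding_hand_joints)
    loss.backward()
    optimizer.step()
```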
⚙️ CGrasp & CReach generate a hand-only grasp & a reaching body, respectively, with varied poses obtained by sampling their latent spaces.
• Importantly, the palm & arm directions satisfy a desired 3D direction vector (the condition)!
• This direction is sampled from ⚙️ ReachingField! (A toy sketch of such conditioned sampling follows below.)
🧵 6/10
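To make "sampling the latent space under a direction condition" concrete, here is a hedged sketch of what such a conditional decoder could look like. The architecture and dimensions are invented purely for illustration and do not reflect the released CGrasp/CReach models.

```python
# Toy direction-conditioned decoder: a latent code (varied pose) is decoded
# together with a 3D direction vector (the condition). Invented architecture.
import torch
import torch.nn as nn

class ToyConditionalDecoder(nn.Module):
    def __init__(self, latent_dim=32, cond_dim=3, out_dim=63):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim),  # e.g. a flattened hand/body pose
        )

    def forward(self, z, direction):
        return self.net(torch.cat([z, direction], dim=-1))

decoder = ToyConditionalDecoder()
# Direction as sampled from ReachingField (here just a random unit vector).
direction = nn.functional.normalize(torch.randn(1, 3), dim=-1)
z = torch.randn(1, 32)        # varied pose: sample the latent space
pose = decoder(z, direction)  # pose respecting the desired 3D direction
```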
⚙️ ReachingField is a probabilistic 3D ray field encoding the directions from which a body's arm & hand can likely reach an object without penetration.
• Objects near the ground are likely grasped from high above.
• Objects high above the ground are likely grasped from below.
(A toy sampling sketch follows below.)
🧵 5/10
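As a toy illustration of the idea (not the released ReachingField code), a reaching direction could be drawn from a discretised ray field whose weights encode a height-dependent, penetration-aware prior; the heuristic below is made up purely to show the sampling mechanics.

```python
# Toy probabilistic ray field: weight candidate approach rays by a made-up
# height-based prior, then sample one reaching direction.
import numpy as np

rng = np.random.default_rng(0)

# Candidate unit rays pointing from the surroundings toward the object.
candidates = rng.normal(size=(1024, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)

object_height = 0.2  # metres above the floor (toy value)

# Toy prior: low objects are reached from above (downward-pointing rays),
# high objects from below (upward-pointing rays).
preferred_z = -1.0 if object_height < 1.0 else 1.0
weights = np.exp(3.0 * preferred_z * candidates[:, 2])
weights /= weights.sum()

direction = candidates[rng.choice(len(candidates), p=weights)]
```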
💡 Our key idea is to perform local-scene reasoning *early on*, so that we generate an *already-compatible* guiding hand & body, and *only* the body needs a *small* refinement to match the hand.
CWGrasp consists of three novel models (composed as in the sketch below):
• ReachingField,
• CGrasp,
• CReach.
🧵 4/10
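Putting the three models together, the flow described in this thread could be glued roughly as below; every function is a hypothetical stub standing in for the per-model repos above, not their real entry points.

```python
# Hypothetical glue code for the CWGrasp flow; all functions are placeholder
# stubs, not the real APIs of the ReachingField/CGrasp/CReach repos.
def sample_reachingfield(object_mesh, receptacle_mesh):
    # Scene-aware reaching direction (see the ray-field sketch above).
    return (0.0, 0.0, -1.0)

def sample_cgrasp(object_mesh, direction):
    # Direction-conditioned, hand-only guiding grasp.
    return {"hand_pose": [0.0] * 45, "direction": direction}

def sample_creach(direction):
    # Direction-conditioned reaching body.
    return {"body_pose": [0.0] * 63, "direction": direction}

def refine_body(body, guiding_hand):
    # Small, body-only refinement pulling the fingers onto the guiding hand.
    return body

def cwgrasp(object_mesh, receptacle_mesh):
    direction = sample_reachingfield(object_mesh, receptacle_mesh)
    guiding_hand = sample_cgrasp(object_mesh, direction)
    body = sample_creach(direction)
    return refine_body(body, guiding_hand), guiding_hand
```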
🎯 We tackle this with CWGrasp in a divide-and-conquer way.
This is inspired by FLEX [Tendulkar et al.], which:
• generates a guiding hand-only grasp,
• generates many random bodies,
• post-processes the guiding hand to match the body, & the body to match the guiding hand.
🧵 3/10
This is challenging 🫣 because:
• the body needs to plausibly reach the object,
• the fingers need to dexterously grasp the object,
• the hand pose and object pose need to look compatible with each other, and
• training datasets for 3D whole-body grasps are really scarce.
🧵 2/10
📢 We present CWGrasp, a framework for generating 3D Whole-Body Grasps with Directional Controllability!
Specifically:
• given an object to grasp (shown in red) placed on a receptacle (brown),
• we aim to generate a body (gray) that grasps the object.
🧵 1/10
📢 Short deadline extension (Feb 24): one more week left to submit your application!
16.02.2025 22:42
📢 I am #hiring 2x #PhD candidates to work on Human-centric #3D #ComputerVision at the University of #Amsterdam!
The positions are funded by an #ERC #StartingGrant.
For details and for submitting your application please see:
werkenbij.uva.nl/en/vacancies...
• Deadline: Feb 16
Pls RT
Permanent Assistant Professor (Lecturer) position in Computer Vision @bristoluni.bsky.social [DL 6 Jan 2025]
This is a research+teaching permanent post within MaVi group uob-mavi.github.io in Computer Science. Suitable for strong postdocs or exceptional PhD graduates.
t.co/k7sRRyfx9o
1/2
If you are at #BMVC2024, I will give a talk (remotely) at the ANIMA 2024 interdisciplinary workshop on "non-invasive human motion characterization".
• Room M2. Thursday at 14:00++ (UK).
• Talk title: "Towards 3D Perception and Synthesis of Humans in Interaction".