Dimitris Tzionas's Avatar

Dimitris Tzionas

@dimtzionas.bsky.social

Assistant Professor for 3D Computer Vision at University of Amsterdam. 3D Human-centric Perception & Synthesis: bodies, hands, objects. Past: MPI for Intelligent Systems, Univ. of Bonn, Aristotle Univ. of Thessaloniki Website: https://dtzionas.com

398 Followers  |  121 Following  |  14 Posts  |  Joined: 23.11.2024

Latest posts by dimtzionas.bsky.social on Bluesky


Your #ICCV2025 paper got rejected? Give it another try and submit to our proceedings track!

Your #ICCV2025 paper got accepted? Congrats! Give it even more visibility by joining our nectar track.

More info: sites.google.com/view/neuslam...

27.06.2025 16:48 — 👍 11    🔁 5    💬 0    📌 1

Why does 3D human-object reconstruction fail in the wild or get limited to a few object classes? A key missing piece is accurate 3D contact. InteractVLM (#CVPR2025) uses foundational models to infer contact on humans & objects, improving reconstruction from a single image. (1/10)

15.06.2025 12:23 — 👍 5    🔁 2    💬 1    📌 0

๐—œ๐—ป๐˜๐—ฒ๐—ฟ๐—ฎ๐—ฐ๐˜๐—ฉ๐—Ÿ๐— : ๐Ÿฏ๐—— ๐—œ๐—ป๐˜๐—ฒ๐—ฟ๐—ฎ๐—ฐ๐˜๐—ถ๐—ผ๐—ป ๐—ฅ๐—ฒ๐—ฎ๐˜€๐—ผ๐—ป๐—ถ๐—ป๐—ด ๐—ณ๐—ฟ๐—ผ๐—บ ๐Ÿฎ๐—— ๐—™๐—ผ๐˜‚๐—ป๐—ฑ๐—ฎ๐˜๐—ถ๐—ผ๐—ป๐—ฎ๐—น ๐— ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€
Sai Kumar Dwivedi, Dimitrije Antiฤ‡, Shashank Tripathi ... Dimitrios Tzionas
arxiv.org/abs/2504.05303
Trending on www.scholar-inbox.com

09.04.2025 06:00 — 👍 5    🔁 1    💬 0    📌 0

If you are at #3DV2025 @3dvconf.bsky.social today, George and Omid will be presenting our work on '3D Whole-Body Grasps with Directional Controllability'. Stop by to discuss human avatars and interactions! 😃
👉 Poster 6-15
👉 Local time 15:30 - 17:00
๐Ÿ‘‰ Website: gpaschalidis.github.io/cwgrasp/

28.03.2025 00:46 — 👍 6    🔁 0    💬 0    📌 0
CWGrasp

CWGrasp will be presented @3dvconf.bsky.social #3DV2025

Authors: G. Paschalidis, R. Wilschut, D. Antić, O. Taheri, D. Tzionas
Collaboration: University of Amsterdam, MPI for Intelligent Systems
Project: gpaschalidis.github.io/cwgrasp
Paper: arxiv.org/abs/2408.16770
Code: github.com/gpaschalidis...

🧵 10/10

14.03.2025 18:44 — 👍 2    🔁 0    💬 0    📌 0

🧩 Our code is modular: each model has its own repo.
You can easily integrate these into your code & build new research!

🧩 CGrasp: github.com/gpaschalidi...
🧩 CReach: github.com/gpaschalidi...
🧩 ReachingField: github.com/gpaschalidi...
🧩 CWGrasp: github.com/gpaschalidi...

🧵 9/10

14.03.2025 18:36 — 👍 0    🔁 0    💬 1    📌 0

โš™๏ธ CWGrasp:
๐Ÿ‘‰ requires 500x less samples & runs 10x faster than SotA,
๐Ÿ‘‰ produces grasps that are perceived as more realistic than SotA ~70% of the times,
๐Ÿ‘‰ works well for objects placed at various "heights" from the floor,
๐Ÿ‘‰ generates both right- & left-hand grasps.

๐Ÿงต 8/10

14.03.2025 18:36 — 👍 0    🔁 0    💬 1    📌 0

👉 We condition both CGrasp & CReach on the same direction.
👉 This produces a hand-only guiding grasp & a reaching body that are already mutually compatible!
🎯 Thus, *only* the body needs a *small* refinement so that its fingers match the guiding hand!

🧵 7/10

14.03.2025 18:35 — 👍 0    🔁 0    💬 1    📌 0

โš™๏ธ CGrasp & CReach - generate a hand-only grasp & reaching body, respectively, with varied pose by sampling their latent space.
๐Ÿ‘‰ Importantly, the palm & arm direction satisfy a desired (condition) 3D direction vector!
๐Ÿ‘‰ This direction is sampled from โš™๏ธ ReachingField!

๐Ÿงต 6/10

14.03.2025 18:35 — 👍 1    🔁 0    💬 1    📌 0

โš™๏ธ ReachingField - is a probabilistic 3D ray field encoding directions from which a bodyโ€™s arm & hand likely reach an object without penetration.
๐Ÿ‘‰ Objects near the ground are likely grasped from high above
๐Ÿ‘‰ Objects high above the ground are likely grasped from below

๐Ÿงต 5/10

14.03.2025 18:35 — 👍 0    🔁 0    💬 1    📌 0
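The ReachingField idea can be illustrated with a toy sketch. Nothing below is the paper's implementation: the heuristic weighting (objects near the floor favor top-down approach directions, elevated objects favor bottom-up ones) and all names are hypothetical stand-ins, only meant to show what "a probabilistic 3D ray field over reaching directions" means:

```python
import numpy as np

def toy_reaching_field(object_height, n_dirs=1000, rng=None):
    """Toy stand-in for ReachingField (NOT the paper's model).

    Samples unit approach directions on a sphere, weighted so that a
    low-placed object is approached from above and a high-placed object
    from below. Directions point FROM the hand TOWARD the object.
    """
    rng = np.random.default_rng(rng)
    # Uniform random directions on the unit sphere.
    v = rng.normal(size=(n_dirs, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # v[:, 2] is the vertical component. For a low object the approach
    # should point downward (z < 0); for a high object, upward (z > 0).
    # The 1.0 m threshold is an arbitrary toy choice.
    pref = -v[:, 2] if object_height < 1.0 else v[:, 2]
    w = np.maximum(pref, 0.0) + 1e-6  # near-zero weight for implausible rays
    w /= w.sum()
    idx = rng.choice(n_dirs, p=w)
    return v[idx]
```

A real ReachingField would additionally ray-cast against the scene to rule out directions blocked by the receptacle; the toy version keeps only the "height determines likely approach" intuition from the post above.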

💡 Our key idea is to perform local-scene reasoning *early on*: we generate an *already-compatible* guiding hand & body, so *only* the body needs a *small* refinement to match the hand.

⚙️ CWGrasp consists of three novel models:
👉 ReachingField,
👉 CGrasp,
👉 CReach.

🧵 4/10

14.03.2025 18:35 — 👍 1    🔁 0    💬 1    📌 0
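The divide-and-conquer flow described in this thread (sample one reaching direction, condition both the hand and the body generator on it, then refine only the body) can be sketched as toy code. Every function, data structure, and numeric choice here is a hypothetical stand-in, not the actual CWGrasp API:

```python
import numpy as np

def toy_cwgrasp(direction, rng=None):
    """Toy sketch of the CWGrasp flow (hypothetical stand-ins throughout):
    condition the hand AND body generators on the SAME reaching direction,
    then refine only the body's fingers to match the guiding hand."""
    rng = np.random.default_rng(rng)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)  # direction, as if sampled from ReachingField

    # Stand-in for CGrasp: a hand-only grasp whose palm faces along the
    # conditioning direction; 5 random joint values stand in for a finger
    # pose sampled from a latent space.
    hand = {"palm_dir": d, "fingers": rng.uniform(0.0, 1.0, size=5)}

    # Stand-in for CReach: a reaching body whose arm points along the
    # SAME direction; its finger pose starts out generic.
    body = {"arm_dir": d.copy(), "fingers": np.full(5, 0.5)}

    # Because hand & body share the direction, they are already mutually
    # compatible; only a small refinement nudges the body's fingers onto
    # the guiding hand's fingers.
    while np.max(np.abs(body["fingers"] - hand["fingers"])) > 1e-3:
        body["fingers"] += 0.5 * (hand["fingers"] - body["fingers"])
    return body, hand
```

The point of the sketch is the structure, not the numbers: no expensive joint optimization over hand *and* body is needed, because the shared direction condition makes the two generations agree from the start.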

🎯 We tackle this with CWGrasp in a divide-and-conquer way.

This is inspired by FLEX [Tendulkar et al.], which:
👉 generates a guiding hand-only grasp,
👉 generates many random bodies,
👉 post-processes the guiding hand to match the body, & the body to match the guiding hand.

🧵 3/10

14.03.2025 18:35 — 👍 1    🔁 0    💬 1    📌 0

This is challenging 🫣 because:
👉 the body needs to plausibly reach the object,
👉 the fingers need to dexterously grasp the object,
👉 the hand pose and object pose need to look mutually compatible, and
👉 training datasets for 3D whole-body grasps are really scarce.

🧵 2/10

14.03.2025 18:35 — 👍 0    🔁 0    💬 1    📌 0

📢 We present CWGrasp, a framework for generating 3D Whole-Body Grasps with Directional Controllability 🎉
Specifically:
👉 given an object to be grasped (red) placed on a receptacle (brown),
👉 we aim to generate a body (gray) that grasps the object.

🧵 1/10

14.03.2025 18:35 — 👍 11    🔁 1    💬 1    📌 1

📢 Short deadline extension (24/2) -- One more week left to submit your application!

16.02.2025 22:42 — 👍 6    🔁 2    💬 0    📌 0
Vacancy — PhD Positions, Project 'Spatiotemporal Reconstruction of Interacting People for Perceiving Systems' Do you want to help computers see, understand, and assist us, humans, in everyday life? Are you excited about 3D Machine Perception, 3D Human and Object Understanding, 3D Human Avatars, and Machine Le...

📢 I am #hiring 2x #PhD candidates to work on Human-centric #3D #ComputerVision at the University of #Amsterdam!

The positions are funded by an #ERC #StartingGrant.

For details and to submit your application, please see:
werkenbij.uva.nl/en/vacancies...

🆘 Deadline: Feb 16 🆘

26.01.2025 17:31 — 👍 15    🔁 6    💬 1    📌 2
https://tinyurl.com/BristolCVLectureship

Pls RT
Permanent Assistant Professor (Lecturer) position in Computer Vision @bristoluni.bsky.social [DL 6 Jan 2025]
This is a research+teaching permanent post within MaVi group uob-mavi.github.io in Computer Science. Suitable for strong postdocs or exceptional PhD graduates.
t.co/k7sRRyfx9o
1/2

04.12.2024 17:22 — 👍 22    🔁 14    💬 1    📌 1

If you are at #BMVC2024, I will give a talk (remotely) at the ANIMA 2024 interdisciplinary workshop on "non-invasive human motion characterization".
👉 Room M2, Thursday at 14:00++ (UK).
👉 Talk title: "Towards 3D Perception and Synthesis of Humans in Interaction".

28.11.2024 02:01 — 👍 4    🔁 1    💬 0    📌 0
