
Michael J. Black

@michael-j-black.bsky.social

Director, Max Planck Institute for Intelligent Systems; Chief Scientist Meshcapade; Speaker, Cyber Valley. Building 3D humans. https://ps.is.mpg.de/person/black https://meshcapade.com/ https://scholar.google.com/citations?user=6NjbexEAAAAJ&hl=en&oi=ao

3,039 Followers  |  98 Following  |  53 Posts  |  Joined: 19.11.2024

Latest posts by michael-j-black.bsky.social on Bluesky

InteractVLM: 3D Interaction Reasoning from 2D Foundational Models
interactvlm.is.tue.mpg.de

InterDyn: Controllable Interactive Dynamics with Video Diffusion Models
interdyn.is.tue.mpg.de

Reconstructing Animals and the Wild
raw.is.tue.mpg.de

Workshop paper:

Generative Zoo
genzoo.is.tue.mpg.de

09.06.2025 08:05 — 👍 3    🔁 0    💬 0    📌 0

ChatGarment: Garment Estimation, Generation and Editing via Large Language Models
chatgarment.github.io

ChatHuman: Chatting about 3D Humans with Tools
chathuman.github.io

PICO: Reconstructing 3D People In Contact with Objects
pico.is.tue.mpg.de

09.06.2025 08:05 — 👍 3    🔁 0    💬 1    📌 0

Here are all the CVPR projects that I'm part of in one thread.

Conference papers:

PromptHMR: Promptable Human Mesh Recovery
yufu-wang.github.io/phmr-page/

DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models
radualexandru.github.io/difflocks/

09.06.2025 08:05 — 👍 10    🔁 2    💬 2    📌 0

Final drop in our #CVPR2025 video series: PICO 🤝📦

Watch how we reconstruct realistic human-object interaction from just one image — with dense contact and mesh fitting!

👋 Visit our booth 1333 at CVPR.

🔗 Paper link in the thread.

#3DBody #AI #SMPL

05.06.2025 13:13 — 👍 5    🔁 3    💬 2    📌 0

My dream has been to take a photo of a person, extract the 2D sewing pattern of their clothing, and then turn it into a 3D garment. ChatGarment does exactly this, plus it lets you edit the garment, or create a completely new one, using text prompts. chatgarment.github.io At CVPR2025!

28.05.2025 09:33 — 👍 10    🔁 5    💬 0    📌 0

To estimate 3D humans from video in world coordinates, we add side information to prompt the process. Prompts include bounding boxes, face detections, segmentation masks, and text descriptions. It's currently the most accurate video-based method out there. See us at CVPR2025.

23.05.2025 15:42 — 👍 4    🔁 1    💬 0    📌 0

Check out DiffLocks, appearing at #CVPR2025. From a single image, we estimate about 100K hair strands that you can then physically simulate. We use a dataset of 40K synthetic hair images with ground truth strands. It's all available for research purposes.

20.05.2025 13:09 — 👍 20    🔁 5    💬 1    📌 0

Yay, @cvprconference.bsky.social, we're in! 🎉 #CVPR2025

5 papers accepted, 5 going live 🚀

Catch PromptHMR, DiffLocks, ChatHuman, ChatGarment & PICO at Booth 1333, June 11–15.

Details about the papers in the thread! 👇

#3DBody #SMPL #GenerativeAI #MachineLearning

15.05.2025 12:49 — 👍 5    🔁 4    💬 5    📌 0

🔥 Heading to #ICRA2025?

Join us on May 23, 2pm (Room 316) for MoCapade: markerless motion capture from any video!

Powered by PromptHMR (CVPR 2025). No suits, no markers — just motion. 🕺💻

#AI #3DMotion #SMPL #Robotics #ICRA #Meshcapade

13.05.2025 14:33 — 👍 5    🔁 3    💬 1    📌 0

Improved foot-ground contact coming soon to MoCapade 3.0.

08.05.2025 06:38 — 👍 5    🔁 1    💬 0    📌 0

🎬 Join Meshcapade at FMX!

See how to turn video or text into ready-to-use 3D motion — no suits or markers needed.

Workshop: May 8, 10:00 AM.

Perfect for animation, VFX & game dev!

📍 Info: fmx.de/en/program/p...

#FMX2025 #3DMotion

28.04.2025 09:20 — 👍 2    🔁 2    💬 0    📌 0

St4RTrack: Simultaneous 4D Reconstruction and Tracking in the World

Haiwen Feng, @junyi42.bsky.social, @qianqianwang.bsky.social, Yufei Ye, Pengcheng Yu, @michael-j-black.bsky.social, Trevor Darrell, @akanazawa.bsky.social

DUSt3R-like framework

arxiv.org/abs/2504.13152

18.04.2025 13:05 — 👍 10    🔁 4    💬 1    📌 0
Meshcapade: From zero to game-ready assets in seconds (GDC 2025 Presentation)
YouTube video by Meshcapade

Missed us at GDC?
Watch Part 1 of our talk here 👉 youtu.be/0jCTiQMutow

🚶 Motion capture with MoCapade
🎮 Import directly into Unreal Engine
🎭 Retarget to any character
👀 Bonus: sneak peek at 3D hair & realtime single-cam mocap

🕴️ No suits. 📍 No markers. 🤳 Just one camera.

18.04.2025 12:06 — 👍 4    🔁 3    💬 1    📌 0

Thanks for the pointer!

12.04.2025 05:21 — 👍 0    🔁 0    💬 0    📌 0
The AI tariff?

Will there be an AI tariff? TL;DR: The societies that will "win" the AI race will not be those that develop the technology first, but those that are best able to manage the long-term social disruption AI will cause.
perceiving-systems.blog/en/news/what...
on Medium
medium.com/@black_51980...

11.04.2025 12:53 — 👍 13    🔁 3    💬 3    📌 0

One shot. One town. Endless dancing. 🕺💃🎶

This one-take Unreal animation uses Meshcapade to bring every character to life. No mocap suits, just seamless motion from start to finish.

Stylized, cinematic, and full of vibes; exactly how digital animation should feel 📹✨

#3DAnimation #MarkerlessMocap

28.03.2025 14:23 — 👍 10    🔁 3    💬 0    📌 0

The arXiv paper for PRIMAL is now online. It's a data-driven, interactive avatar that can be controlled by varied commands, runs in a game engine, and responds to perturbations (without physics simulation). arxiv.org/abs/2503.17544

25.03.2025 10:38 — 👍 15    🔁 0    💬 0    📌 0
PRIMAL: Physically Reactive and Interactive Motor Model for Avatar Learning
YouTube video by Yan Zhang

👨‍🎤 Ever seen an interactive generative avatar running inside @unrealengine.bsky.social?

Check out our latest work, PRIMAL, in collaboration with Max Planck Institute for Intelligent Systems & Stanford University - live demo at @officialgdc.bsky.social! 🎮

www.youtube.com/watch?v=-Gcp...

21.03.2025 21:17 — 👍 5    🔁 3    💬 1    📌 1

🎉🎉🎉 Happy to announce that the code for our paper Gaussian Garments is now public!

Link: github.com/eth-ait/Gaus...

Gaussian Garments uses a combination of 3D meshes and Gaussian splatting to reconstruct photorealistic, simulation-ready digital garments from multi-view videos. 🧵

20.03.2025 15:46 — 👍 14    🔁 2    💬 1    📌 1
CameraHMR: Aligning People with Perspective (3DV 2025)
YouTube video by Michael Black

The CameraHMR video is now on YouTube. This is currently the most accurate single-image method for estimating 3D human shape and pose. The paper will be presented at 3DV. The code and data are all online here: camerahmr.is.tue.mpg.de
youtu.be/v3WzpjXpknc

17.03.2025 14:10 — 👍 14    🔁 3    💬 0    📌 0

🎮 Motion Generation in Unreal, made SMPL.

Experience real-time reactive behavior — characters adapt instantly to your input. With controllable generative 3D motion, every move is unique. Real-time motion blending keeps it smooth.

🚀 See it at Booth C1821!

#3DAnimation #MotionGeneration #GameDev #GDC

13.03.2025 11:03 — 👍 9    🔁 3    💬 0    📌 0

And it just keeps coming from @meshcapade.com -- next up, real-time markerless motion capture from a single camera. Check it out at #GDC2025. Yes, the founders will be there to dance for you (as in this video) but it's more fun to try it yourself!

05.03.2025 10:47 — 👍 7    🔁 0    💬 0    📌 0

Faces, Expressions, Hair — Brought to Life with Meshcapade.

From facial animation to 3D hair, creating digital humans has never been easier.

See it at #GDC2025! Visit Booth C1821 and experience motion, detail, expression.

#UnrealEngine #3DAnimation #MotionCapture #FacialAnimation #genAI #Meshcapade

03.03.2025 09:00 — 👍 10    🔁 5    💬 0    📌 0

I'll be at #GDC2025 so if you are there and want to meet, you can probably find me at the @meshcapade.com booth.

19.02.2025 13:53 — 👍 3    🔁 1    💬 0    📌 0
AI Innovation: Does Germany Have Answers to ChatGPT, DeepSeek & Co.? – Shortcut | DER SPIEGEL
YouTube video by DER SPIEGEL

Meshcapade was just featured again in DER SPIEGEL and their Spiegel-Shortcut podcast! 🎙️✨

The podcast highlights the diversity of ideas shaping the future - something we're very proud to be part of!

▶️ Watch on YouTube: www.youtube.com/watch?v=gN0X...

11.02.2025 19:13 — 👍 8    🔁 2    💬 1    📌 0

Turning videos into 3D humans has never been easier or more accurate. MoCapade 3.0 improves everything over 2.0 while adding 3D camera tracking (and output) and multi-person capture. Try out the new state-of-the-art. Of course, there's more to come.

06.02.2025 06:32 — 👍 34    🔁 6    💬 0    📌 0
MoCapade 3.0: Meshcapade's markerless mocap is live 🥳 🎉
YouTube video by Meshcapade

MoCapade 3.0 goes LIVE today 🥳 🎉

👯 Multi-person capture
🎥 3D camera motion
🙌 Detailed hands & gestures
⏫ New GLB, MP4 and SMPL export formats

All from a single camera. Any camera! 📹 🤳 📸
Ready, set, CAPTURE! 🏂

youtu.be/jizULlZTAR8

05.02.2025 16:00 — 👍 11    🔁 5    💬 1    📌 1

Thanks, but this version of CameraHMR estimates SMPL and not SMPL-X. The default in SMPL is flat hands. So we can't really take credit for this! For a commercial version with SMPL-X hands, try out meshcapade.me

31.01.2025 08:56 — 👍 1    🔁 0    💬 0    📌 0

"AI is like electricity: the decisive factor is not who invented it, but who knows how to use it best." ⚡

28.01.2025 11:30 — 👍 5    🔁 2    💬 1    📌 0
Mocap Suit - Mocap Video - TEST  (subtitles)
YouTube video by alerender

Such a great comparison video showing what's possible with @meshcapade.com using a single video compared to using a mocap suit.

From 3D artist, Alejandro de Pasquale.

www.youtube.com/watch?v=ess6...

28.01.2025 18:36 — 👍 6    🔁 3    💬 1    📌 0
