
Matheus Gadelha

@gadelha.bsky.social

Research Scientist at Adobe Research. ML/3D/Graphics. http://mgadelha.me

1,399 Followers  |  277 Following  |  78 Posts  |  Joined: 03.07.2023

Latest posts by gadelha.bsky.social on Bluesky


I wrote a notebook for a lecture/exercise on image generation with flow matching. The idea is to use FM to render images composed of simple shapes using their attributes (type, size, color, etc.). Not super useful, but fun and easy to train!
colab.research.google.com/drive/16GJyb...

Comments welcome!
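For reference, here is a minimal sketch of the kind of conditional flow-matching training step such a notebook would use (this is not the notebook's code; fm_loss, velocity_net and the attribute conditioning are placeholder names):

import torch

def fm_loss(velocity_net, images, attrs):
    """Conditional flow matching loss with a linear noise-to-image path.
    images: (B, C, H, W) target renders; attrs: (B, D) shape attributes."""
    b = images.shape[0]
    x0 = torch.randn_like(images)                     # noise endpoint of the path
    t = torch.rand(b, 1, 1, 1, device=images.device)  # random time in [0, 1]
    xt = (1 - t) * x0 + t * images                    # point on the straight-line path
    target_v = images - x0                            # constant velocity along that path
    pred_v = velocity_net(xt, t.flatten(), attrs)     # network predicts the velocity
    return torch.nn.functional.mse_loss(pred_v, target_v)

# Smoke test with random data and a trivial "network" that outputs zeros:
imgs, attrs = torch.rand(4, 3, 32, 32), torch.rand(4, 6)
print(fm_loss(lambda x, t, c: torch.zeros_like(x), imgs, attrs))

Sampling then just integrates dx/dt = velocity_net(x, t, attrs) from t = 0 to t = 1 (e.g. with a few Euler steps), starting from Gaussian noise.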

27.06.2025 16:52 | 👍 40    🔁 8    💬 2    📌 0

Oh nvm I read "our ICCV paper…" haha

20.06.2025 20:26 | 👍 1    🔁 0    💬 0    📌 0

Are the results out? I see nothing in OpenReview :-(

20.06.2025 20:21 | 👍 1    🔁 0    💬 2    📌 0
CVPR 2025 Workshop List

For folks attending CVPR: is there a website where I can see the list of workshops, their location AND time? Day and time are empty when I access cvpr.thecvf.com/Conferences/...

11.06.2025 04:04 | 👍 1    🔁 0    💬 0    📌 0

I will be in Nashville until Saturday for CVPR'25 \o/

DM if you want to meet!

09.06.2025 19:53 | 👍 7    🔁 0    💬 0    📌 0
Wilhem receiving the award on stage

๐Ÿ…Honored to have been awarded at #Eurographics25 for our paper on #LipschitzPruning to speed-up SDF rendering!

👉 The paper's page: wbrbr.org/publications...

Congrats to @wbrbr.bsky.social, M. Sanchez, @axelparis.bsky.social, T. Lambert, @tamyboubekeur.bsky.social, M. Paulin and T. Thonat!

19.05.2025 09:54 | 👍 25    🔁 4    💬 0    📌 0
GitHub - JelteF/PyLaTeX: A Python library for creating LaTeX files

I usually write a Python script that prints some .npy file as a .tex table. It is also useful as an easy way to share results throughout the project, so I consider it part of the codebase. I heard that people who are more serious about this practice use something like github.com/JelteF/PyLaTeX
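As an illustration of the kind of script described above (not the actual one; the function name, file name and metric values are made up), dumping a 2D results array as a LaTeX tabular can be as simple as:

import numpy as np

def array_to_tex(data, row_labels, col_labels, fmt="{:.2f}"):
    """Render a 2D array of results as a LaTeX tabular string."""
    lines = [r"\begin{tabular}{l" + "c" * len(col_labels) + "}"]
    lines.append(" & " + " & ".join(col_labels) + r" \\ \hline")
    for label, row in zip(row_labels, data):
        lines.append(label + " & " + " & ".join(fmt.format(v) for v in row) + r" \\")
    lines.append(r"\end{tabular}")
    return "\n".join(lines)

# Typical usage: metrics = np.load("results.npy"); a small made-up array stands in here.
metrics = np.array([[31.2, 0.95], [29.8, 0.92]])
print(array_to_tex(metrics, ["Ours", "Baseline"], ["PSNR", "SSIM"]))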

14.05.2025 17:46 | 👍 2    🔁 0    💬 0    📌 0

I started doing it because of that and because of vim, but it's an advantage even in Overleaf

14.05.2025 00:47 | 👍 0    🔁 0    💬 0    📌 0

I don't use it for things I do alone (notes, presentations, etc). But for collaborative work it's pretty much mandatory. Students will revolt if you use git hahaha

14.05.2025 00:45 | 👍 0    🔁 0    💬 1    📌 0

NeurIPS and SIGGRAPH Asia deadlines are coming.

Make your life easier: read this thread.

14.05.2025 00:09 | 👍 7    🔁 2    💬 1    📌 0

Let's gooo!!! \o/

Probably my first time visiting Brazil for professional reasons :-)

28.04.2025 19:48 | 👍 3    🔁 0    💬 0    📌 0

What features did you find particularly useful?

I liked asking questions about the code base and the tab completion seems nice, but I've been getting unhelpful suggestions for all the "agentic" stuff.

14.04.2025 23:20 | 👍 1    🔁 0    💬 1    📌 0

By popular demand, we are extending #CVPR2025 coverage to Bluesky. Stay tuned!

27.02.2025 21:07 | 👍 124    🔁 17    💬 5    📌 2

Exciting news! MegaSAM code is out 🔥 & the updated Shape of Motion results with MegaSAM are really impressive! A year ago I didn't think we could make any progress on these videos: shape-of-motion.github.io/results.html
Huge congrats to everyone involved and the community 🎉

24.02.2025 18:52 | 👍 70    🔁 16    💬 3    📌 0

*it

23.02.2025 19:50 | 👍 0    🔁 0    💬 0    📌 0

I understand the sentiment, but it is important for people to know that this currently does not reflect the reviewer guidelines at CVPR: cvpr.thecvf.com/Conferences/...

"(…) you should include specific feedback on ways the authors can improve their papers."

23.02.2025 18:54 | 👍 5    🔁 0    💬 2    📌 0

But is Meta indeed cancelling professional fact checking worldwide? Their response to the Brazilian Supreme Court inquiry said the dismissal of professional fact checking was exclusive to the US.

21.02.2025 17:52 | 👍 0    🔁 0    💬 0    📌 0

Late to post, but excited to introduce CUT3R!

An online 3D reasoning framework for many 3D tasks directly from just RGB. For static or dynamic scenes. Video or image collections, all in one!

Project Page: cut3r.github.io
Code and Model: github.com/CUT3R/CUT3R

18.02.2025 17:03 | 👍 34    🔁 6    💬 2    📌 1

Those plots are so cool! \o/

18.02.2025 22:10 | 👍 1    🔁 0    💬 0    📌 0

It's a plugin for neovim that has some stuff similar to Cursor; it is open source and you can configure it to use the model of your choice

13.02.2025 02:10 | 👍 1    🔁 0    💬 0    📌 0

I am using Avante + vim and I am pretty happy about it

13.02.2025 02:03 | 👍 2    🔁 0    💬 1    📌 0

🌌🛰️🔭 Wanna know which features are universal vs unique in your models and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"!

arxiv.org/abs/2502.03714

(1/9)

07.02.2025 15:15 | 👍 56    🔁 17    💬 1    📌 5

New paper about pictures: I identify trends in geometric perspective in my own drawings and photos, and compare them to how the original scenes looked. I discuss what these trends might say about art history and vision science. Published in _Art & Perception_. #visionscience
psyarxiv.com/pq8nb

06.02.2025 22:58 | 👍 19    🔁 6    💬 3    📌 0

"๐‘๐š๐๐ข๐š๐ง๐ญ ๐…๐จ๐š๐ฆ: Real-Time Differentiable Ray Tracing"

A mesh-based 3D represention for training radiance fields from collections of images.

radfoam.github.io
arxiv.org/abs/2502.01157

Project co-led by my PhD students Shrisudhan Govindarajan and Daniel Rebain, and w/ co-advisor Kwang Moo Yi

05.02.2025 18:59 | 👍 54    🔁 12    💬 2    📌 1

It would be nice to mention someone and have them receive an e-mail with a subject like "You (Reviewer HjKl) were mentioned in a discussion for Paper 447809 in OpenReview"

03.02.2025 18:31 | 👍 0    🔁 0    💬 0    📌 0

#CVPR2025

Is there any way in OpenReview to "mention" a reviewer in a discussion? I think reviewers get an e-mail with whatever message is posted in the discussion and sent to them, but they have no idea whether Reviewer HjKl is them or someone else...

03.02.2025 18:31 | 👍 1    🔁 0    💬 1    📌 0

I will keep repeating this until I convince everyone I work with or I will die trying.

23.01.2025 18:34 | 👍 8    🔁 1    💬 0    📌 1

I think this is a great idea!

17.01.2025 22:57 | 👍 2    🔁 0    💬 0    📌 0

It would *really* help if OpenReview showed how many reviews a paper already had on the reviewer assignment page

15.01.2025 19:12 | 👍 1    🔁 0    💬 0    📌 0

This is a great speaker lineup and I will definitely try to attend.

I can't help but think, though: if everyone is trying to stand out, you will stand out by not trying to :-)

(I am obviously kidding, the workshop is about more than that, visit the webpage to learn more)

14.01.2025 06:37 | 👍 5    🔁 0    💬 1    📌 0
