10 new CS professors! 🥳
@anandbhattad.bsky.social @uthsav.bsky.social @gligoric.bsky.social @murat-kocaoglu.bsky.social @tiziano.bsky.social
@anandbhattad.bsky.social
Incoming Assistant Professor at Johns Hopkins University | RAP at Toyota Technological Institute at Chicago | web: https://anandbhattad.github.io/ | Knowledge in Generative Image Models, Intrinsic Images, Image-based Relighting, Inverse Graphics
I decided not to travel to #ICCV2025 because it coincides with Diwali (Oct 20). Diwali often falls near the #CVPR deadline window, but this year overlaps with ICCV. I understand it's hard to avoid all global holidays, but I hope future conferences can keep this in mind when selecting dates.
06.10.2025 19:00

I will be recruiting a few students for Fall 2026. In particular, I will strongly consider a PhD applicant with training in applied/computational mechanics and computer vision/machine learning. If you or someone you know has this background, please contact me.
06.10.2025 18:39

So You Want to Be an Academic?
You're a couple of years into your PhD but wondering, "Am I doing this right?"
Most of the advice is aimed at graduating students. But there's far less for junior folks who are still finding their academic path.
My candid takes: anandbhattad.github.io/blogs/jr_gra...
Thanks Andreas and the Scholar Inbox team! This is by far the best paper recommendation system I've come across. No more digging through overwhelming volumes; like the blog says, the right papers just show up in my inbox.
30.06.2025 14:47

On our blog: Science is moving fast. How do we keep up? #ScholarInbox, developed by the Autonomous Vision Group led by @andreasgeiger.bsky.social, helps researchers stay ahead - by making the discovery of #openaccess papers smarter and more personal: www.machinelearningforscience.de/en/scholar-i...
30.06.2025 12:40

All slides from the #cvpr2025 (@cvprconference.bsky.social) workshop "How to Stand Out in the Crowd?" are now available on our website:
sites.google.com/view/standou...
This is probably one of the best talks (and slide decks) I have ever seen. I was lucky to see this live! Great talk again :)
23.06.2025 19:24

A special shout-out to all the job-market candidates this year: it's been tough with interviews canceled and hiring freezes.
After UIUC's blue and @tticconnect.bsky.social blue, I'm delighted to add another shade of blue to my journey at Hopkins @jhucompsci.bsky.social. Super excited!!
We will be recruiting PhD students, postdocs, and interns. Updates soon on my website: anandbhattad.github.io
Also, feel free to chat with me @cvprconference.bsky.social #CVPR2025
I'm immensely grateful to my mentors, friends, colleagues, and family for their unwavering support.
At JHU, I'll be starting a new lab: 3P Vision Group. The "3Ps" are Pixels, Perception & Physics.
The lab will focus on 3 broad themes:
1) GLOW: Generative Learning Of Worlds
2) LUMA: Learning, Understanding, & Modeling of Appearances
3) PULSE: Physical Understanding and Learning of Scene Events
I'm thrilled to share that I will be joining Johns Hopkins University's Department of Computer Science (@jhucompsci.bsky.social, @hopkinsdsai.bsky.social) as an Assistant Professor this fall.
02.06.2025 19:46

FastMap: Revisiting Dense and Scalable Structure from Motion
Jiahao Li, Haochen Wang, @zubair-irshad.bsky.social, @ivasl.bsky.social, Matthew R. Walter, Vitor Campagnolo Guizilini, Greg Shakhnarovich
tl;dr: replace bundle adjustment (BA) with epipolar error + IRLS; fully PyTorch implementation
arxiv.org/abs/2505.04612
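To give a rough sense of what "epipolar error + IRLS" can look like in PyTorch: the sketch below is only an illustration, not the FastMap implementation. The raw 3x3 fundamental-matrix parameterization, the Sampson residual, and the Cauchy-style weights are all assumptions made for the example.

```python
import torch

def sampson_error(F, x1, x2):
    """First-order (Sampson) epipolar error for homogeneous matches x1, x2 of shape (N, 3)."""
    Fx1 = x1 @ F.T            # epipolar lines in image 2, one per row
    Ftx2 = x2 @ F             # epipolar lines in image 1
    num = (x2 * Fx1).sum(dim=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den.clamp_min(1e-12)

def irls_refine(F0, x1, x2, steps=100, scale=1.0, lr=1e-3):
    """Toy IRLS loop: freeze robust weights, take a weighted step on the residuals, repeat."""
    F = F0.clone().requires_grad_(True)
    opt = torch.optim.Adam([F], lr=lr)
    for _ in range(steps):
        r = sampson_error(F, x1, x2)
        w = 1.0 / (1.0 + r.detach() / scale)   # Cauchy-style weights, no gradient through them
        loss = (w * r).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            F /= F.norm()                      # fix the overall scale; rank-2 projection omitted
    return F.detach()
```

The IRLS pattern is the key bit here: the robust weights are held fixed while a weighted least-squares step is taken, then recomputed from the new residuals.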
[2/2] However, if we treat 3D as a real task, such as building a usable environment, then these projective geometry details matter. It also ties nicely to Ross Girshick's talk at our RetroCV CVPR workshop last year, which you highlighted.
29.04.2025 16:56

[1/2] Thanks for the great talk and for sharing it online for those who couldn't attend 3DV. I liked your points on our "Shadows Don't Lie" paper. I agree that if the goal is simply to render 3D pixels, then subtle projective geometry errors that are imperceptible to humans are not a major concern.
29.04.2025 16:56

Congratulations and welcome to TTIC! 🥳
15.04.2025 13:03

By "remove," I meant masking the object and using inpainting to hallucinate what could be there instead.
02.04.2025 05:08

This is really cool work!
30.03.2025 00:14

Thanks Noah! Glad you liked it :)
02.04.2025 04:51

[2/2] We also re-run the full pipeline *after each removal*. This matters: new objects can appear, occluded ones can become visible, etc., making the process adaptive and less ambiguous.
The figure above shows a single pass. Once the top bowl is gone, the next "top" bowl gets its own diverse semantics too.
[1/2] Not really... there's quite a bit of variation.
When we remove the top bowl, we get diverse semantics: fruits, plants, and other objects that just happen to fit the shape. As we go down, it becomes less diverse: occasional flowers, new bowls in the middle, & finally just bowls at the bottom.
[10/10] This project began while I was visiting Berkeley last summer. Huge thanks to Alyosha for the mentorship and to my amazing co-author Konpat Preechakul. We hope this inspires you to think differently about what it means to understand a scene.
Project page: visualjenga.github.io
Paper: arxiv.org/abs/2503.21770
[9/10] Visual Jenga is a call to rethink what scene understanding should mean in 2025 and beyond.
We're just getting started. There's still a long way to go before models understand scenes like humans do. Our task is a small, playful, and rigorous step in that direction.
[8/10] This simple idea surprisingly scales to a wide range of scenes: from clean setups like a cat on a table or a stack of bowls... to messy, real-world scenes (yes, even Alyosha's office).
29.03.2025 19:36

[7/10] Why does this work? Because generative models have internalized asymmetries in the visual world.
Search for "cups" → you'll almost always see a table.
Search for "tables" → you rarely see cups.
So: P(table | cup) ≫ P(cup | table)
We exploit this asymmetry to guide counterfactual inpainting.
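A toy sketch of that asymmetry, assuming you have per-image object labels from some source (the label sets below are made up for illustration):

```python
def conditional_cooccurrence(image_labels, a, b):
    """Estimate P(b present | a present) over a list of per-image label sets."""
    with_a = [labels for labels in image_labels if a in labels]
    if not with_a:
        return 0.0
    return sum(b in labels for labels in with_a) / len(with_a)

# Toy usage with made-up label sets:
images = [{"cup", "table"}, {"table"}, {"table", "chair"}, {"cup", "table", "plant"}]
p_table_given_cup = conditional_cooccurrence(images, "cup", "table")   # 1.0 here
p_cup_given_table = conditional_cooccurrence(images, "table", "cup")   # 0.5 here
# The point of [7/10]: the first quantity is typically much larger than the second.
```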
[6/10] We measure dependencies by masking each object, then using a large inpainting model to hallucinate what should be there. If the replacements are diverse, the object likely isn't critical. If it consistently reappears, like the table under the cat, it's probably a support.
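A minimal sketch of how such a dependency score could be computed. The `inpaint_fn` and `embed_fn` callables and the mean-pairwise-similarity diversity measure are hypothetical stand-ins for the inpainting model and a feature extractor; this illustrates the idea, not the authors' actual pipeline.

```python
import numpy as np

def replaceability_score(image, object_mask, inpaint_fn, embed_fn, n_samples=8):
    """Higher score => diverse replacements => object is likely not a support."""
    embeddings = []
    for _ in range(n_samples):
        filled = inpaint_fn(image, object_mask)    # hallucinate what could be there instead
        embeddings.append(embed_fn(filled))        # e.g. a CLIP-style feature vector
    E = np.stack(embeddings)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = E @ E.T
    n = len(E)
    # Mean pairwise similarity is high if the same thing keeps reappearing
    # (probable support, like the table under the cat), lower if samples vary.
    off_diag = sims[~np.eye(n, dtype=bool)]
    return 1.0 - off_diag.mean()
```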
29.03.2025 19:36

[5/10] To solve Visual Jenga, we start with a surprising baseline: no explicit physical reasoning, no 3D, no simulation or dynamics. Instead, we propose a training-free, generative approach that infers object removal order by exploiting statistical co-occurrence learned by generative models.
29.03.2025 19:36

[4/10] The goal of Visual Jenga is simple (see the sketch after this list):
1) Remove one object at a time
2) Generate a sequence down to the background
3) Keep every intermediate scene physically & geometrically stable
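A rough sketch of that loop, assuming hypothetical `detect_objects` and `remove_object` helpers plus the replaceability score sketched above; again an illustration under those assumptions, not the paper's actual code.

```python
def removal_order(image, detect_objects, remove_object, score_fn):
    """Greedily remove the most replaceable object, re-detecting after every removal."""
    order = []
    current = image
    while True:
        objects = detect_objects(current)          # re-run detection on the edited image
        if not objects:                            # only the background is left
            break
        # Pick the object whose inpainted replacements are most diverse,
        # i.e. the one least likely to be supporting something else.
        best = max(objects, key=lambda obj: score_fn(current, obj))
        current = remove_object(current, best)     # mask + inpaint it away
        order.append(best)
    return order
```

Re-detecting after every removal is what makes the process adaptive, as described in [2/2] above: newly revealed or previously occluded objects get their own turn.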
[3/10] Probing this understanding motivates our new task: Visual Jenga, a challenge beyond passive observation.
Like in the game of Jenga, success demands understanding structural dependencies. Which objects can you remove without collapsing the scene? That's where true understanding begins.
[2/10] Today's models can name everything in an image.
But do they understand how a scene holds together?
Inspired by Biederman's classic work on scene perception and influential efforts by Hoiem et al., Bottou et al., and others, we ask: can a model understand support structure and object dependencies?