
Lily Goli

@lilygoli.bsky.social

PhD student at University of Toronto, Research Intern at Waabi, ex. Google DeepMind, 3D Vision Enthusiast

211 Followers  |  88 Following  |  18 Posts  |  Joined: 27.11.2024

Latest posts by lilygoli.bsky.social on Bluesky


Didn't quite make it to SIGGRAPH? Consider submitting to SGP, the abstract deadline is tomorrow: sgp2025.my.canva.site/submit-page-...

03.02.2025 20:33 · 👍 17    🔁 5    💬 0    📌 0

I need to either read or write a blog post about getting scooped!
It’s becoming outrageous.

15.01.2025 14:59 · 👍 2    🔁 0    💬 0    📌 0

Come check out our work on semi-amortized inference for cryoEM today at NeurIPS!

📅 Friday, Dec 13, 11 AM - 2 PM
📌 East Exhibit Hall A-C, Poster #1105

Paper: arxiv.org/abs/2406.10455
Website: shekshaa.github.io/semi-amortized-cryoem/

With: Shayan Shekarforoush, David Lindell and David Fleet

13.12.2024 17:24 · 👍 15    🔁 1    💬 0    📌 1

see you at #neurips2024? 🤩

09.12.2024 18:09 · 👍 3    🔁 0    💬 0    📌 0

That puts my frustration into words perfectly! Though the other side of the coin is that you’ll probably become an expert in that subject, which can be useful.

07.12.2024 16:13 · 👍 1    🔁 0    💬 0    📌 0

Disclaimer: these are personal conclusions, i.e. your personality has a huge effect on how well these thoughts apply in your case.

Note: picture is from my breakfast at GDM.

06.12.2024 21:46 · 👍 2    🔁 0    💬 0    📌 0

Thought 3:
Developing your own code is a great educational process, but always be aware of the open-source code out there. A polished release that took months of effort is probably more useful than code you wrote in half a day and never tested extensively.

06.12.2024 21:46 · 👍 3    🔁 0    💬 1    📌 0

Thought 2: collaboration
Good collaborators can have a huge impact on your progress; however, you shouldn’t necessarily start a collaboration with every brilliant person out there. Only do so if you have a clear idea of how you all fit into the project, or find a project that fits the collaboration.

06.12.2024 21:46 · 👍 5    🔁 0    💬 1    📌 0

Is that exciting, though? For me, not as much as diving into something new.

Is it wise to switch focus often? Probably not. There’s a balance, and with the field moving so fast, it often leans toward sticking with what you know best.

06.12.2024 21:46 · 👍 4    🔁 0    💬 2    📌 0

Thought 1: speed
As PhD students, we usually want to maximize publications, which might lead to writing quick papers. Can a paper written relatively fast be a good paper? I would argue: only if it builds on your previous good work, i.e. you already know a lot about that subject. /

06.12.2024 21:46 · 👍 4    🔁 0    💬 1    📌 0
Breakfast photo

Wrapping up my year as a Student Researcher at Google DeepMind today! 🥲 It’s been an amazing experience (proof by picture 😅).

Excited to join Waabi in the new year and do some cool research on robustness in 🚗 🤖!

I reflected on the past 3.5 years of my PhD today. Here are some thoughts/

06.12.2024 21:46 · 👍 11    🔁 0    💬 1    📌 0

I have not used it in a loss before, but I do agree that if the epipolar error is not too high or too low compared to your overall stats, you probably can't rely on it to infer anything.

02.12.2024 22:57 · 👍 2    🔁 0    💬 0    📌 0

This was my last work at Google DeepMind. An amazing experience. I am very happy that I had the chance to work with my great collaborators on this project: Sara Sabour, Mark Matthews, @marcusabrubaker.bsky.social, Dmitry Lagun, Alec Jacobson, David Fleet, Saurabh Saxena and @taiyasaki.bsky.social

02.12.2024 03:14 · 👍 5    🔁 0    💬 0    📌 0

…and with that we get good quality masks, without human annotation or synthetic supervision!
Look at our website for more results!
romosfm.github.io

02.12.2024 03:14 · 👍 3    🔁 0    💬 1    📌 0

We train a tiny MLP that classifies SAMv2 features as moving or static given the weak supervisory signal from the high- and low-error masks. These features help complete the motion masks over the video effectively!

02.12.2024 03:14 · 👍 3    🔁 0    💬 1    📌 0
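Roughly, what such a classifier could look like (a minimal sketch under my own assumptions, not the released code): a small PyTorch MLP over per-pixel SAMv2 features, trained with binary cross-entropy only on the weakly labeled pixels. The feature dimension, layer sizes, and label convention here are placeholders.

```python
# Sketch only: tiny MLP over per-pixel SAMv2 features, supervised by weak labels
# (0 = likely static, 1 = likely moving, -1 = unlabeled / ignored).
import torch
import torch.nn as nn

class MotionMLP(nn.Module):
    def __init__(self, feat_dim=256, hidden=64):  # feat_dim is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):                 # feats: (N, feat_dim)
        return self.net(feats).squeeze(-1)    # per-pixel moving/static logit

def train_step(mlp, optimizer, feats, weak_labels):
    """feats: (N, feat_dim) SAMv2 features; weak_labels: (N,) in {-1, 0, 1}."""
    keep = weak_labels >= 0                   # train only on weakly supervised pixels
    logits = mlp(feats[keep])
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, weak_labels[keep].float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```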

So... how does it work?

We fit the fundamental matrix between each pair of adjacent frames in the video with RANSAC. We then identify parts of the frame that have a very low or a very high epipolar error, as weak supervision signals for finding the moving objects. Now, how do we complete these signals?

02.12.2024 03:14 · 👍 3    🔁 0    💬 1    📌 0
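For anyone curious what that step looks like in practice, here is a minimal sketch (not the paper's code), assuming dense correspondences between adjacent frames are already available (e.g. from optical flow); the RANSAC parameters and percentile thresholds are illustrative choices only:

```python
# Sketch only: fundamental matrix via RANSAC + Sampson epipolar error
# as a weak moving/static signal between two adjacent frames.
import cv2
import numpy as np

def weak_motion_labels(pts0, pts1, lo_pct=20, hi_pct=80):
    """pts0, pts1: (N, 2) float32 matched pixel coordinates in adjacent frames.
    Returns per-match labels: 0 = likely static, 1 = likely moving, -1 = unlabeled."""
    # Robustly fit the fundamental matrix; RANSAC mostly ignores moving pixels.
    F, _ = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, 1.0, 0.999)

    # Sampson (first-order epipolar) error for every correspondence.
    x0 = np.hstack([pts0, np.ones((len(pts0), 1))])  # homogeneous coordinates
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    Fx0 = x0 @ F.T      # epipolar lines of pts0 in frame 1
    Ftx1 = x1 @ F       # epipolar lines of pts1 in frame 0
    num = np.sum(x1 * Fx0, axis=1) ** 2
    den = Fx0[:, 0]**2 + Fx0[:, 1]**2 + Ftx1[:, 0]**2 + Ftx1[:, 1]**2
    err = num / np.maximum(den, 1e-12)

    # Very low error: consistent with camera motion alone (static).
    # Very high error: violates epipolar geometry (moving). The rest stays unlabeled.
    lo, hi = np.percentile(err, [lo_pct, hi_pct])
    labels = np.full(len(err), -1, dtype=int)
    labels[err <= lo] = 0
    labels[err >= hi] = 1
    return labels
```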

Okay, but why care about motion masks?
We show that good motion masks improve SfM performance, making COLMAP + our masks the SOTA on synthetic benchmarks. We also collect a real evaluation dataset with GT camera poses using a robotic arm, to evaluate our method on real casual captures.

02.12.2024 03:14 · 👍 4    🔁 0    💬 1    📌 0
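For context, one standard way such masks plug into a COLMAP pipeline (an assumed workflow, not necessarily how the paper's numbers were produced) is through the feature extractor's mask option, which skips keypoints on zero-valued pixels; the paths and function name below are hypothetical:

```python
# Sketch only: run COLMAP with per-image masks so features on moving objects are ignored.
# For an image images/frame.jpg, COLMAP looks for a mask at <mask_dir>/frame.jpg.png;
# pixels with value zero (black) are excluded from feature extraction.
import subprocess

def run_masked_colmap(image_dir, mask_dir, database_path="colmap.db"):
    subprocess.run([
        "colmap", "feature_extractor",
        "--database_path", database_path,
        "--image_path", image_dir,
        "--ImageReader.mask_path", mask_dir,   # masks with moving objects zeroed out
    ], check=True)
    subprocess.run([
        "colmap", "exhaustive_matcher",
        "--database_path", database_path,
    ], check=True)
    # ...followed by the usual colmap mapper / bundle-adjustment steps.
```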

Our masks are robust to slow or fast camera movements and can find multiple moving objects even when they are in the background. Look at the pedestrian! 🚶

02.12.2024 03:14 · 👍 3    🔁 0    💬 1    📌 0

First and foremost: some results!!
In RoMo, an optimization process disentangles camera ego motion from scene motion, yielding masks for moving objects 🛵

02.12.2024 03:14 · 👍 4    🔁 0    💬 1    📌 0

Hello everyone!! 👋
Excited to be here and share our latest work to get started!

RoMo: Robust Motion Segmentation Improves Structure from Motion

romosfm.github.io

Boost the performance of your SfM pipeline on dynamic scenes! 🚀 RoMo masks dynamic objects in a video in a zero-shot manner.

02.12.2024 03:14 · 👍 28    🔁 7    💬 1    📌 2
