
Ulyana Piterbarg

@upiter.bsky.social

PhD at NYU studying reasoning, decision-making, and open-endedness | alum of MIT | prev: Google, MSR, MIT CoCoSci https://upiterbarg.github.io/

1,532 Followers  |  329 Following  |  5 Posts  |  Joined: 11.11.2024

Latest posts by upiter.bsky.social on Bluesky


1/13 New Paper!! We try to understand why some LMs self-improve their reasoning while others hit a wall. The key? Cognitive behaviors! Read our paper on how the right cognitive behaviors can make all the difference in a model's ability to improve with RL! 🧡

04.03.2025 18:15 — 👍 57    🔁 17    💬 2    📌 3

Thank you to @sloanfoundation.bsky.social for this generous award to our lab. Hopefully this will bring us closer to building truly general-purpose robots!

18.02.2025 16:50 — 👍 22    🔁 4    💬 3    📌 0

(Many) more details in our paper! arxiv.org/abs/2410.02749

12.02.2025 20:08 — 👍 0    🔁 0    💬 0    📌 0

LMs trained to synthesize programs by repeatedly editing their own generations produce more diverse code compared to baselines

This improves the trade-off between test-time FLOPs and pass@k

12.02.2025 20:08 — 👍 1    🔁 0    💬 1    📌 0
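For context on the metric above: pass@k is usually computed with the unbiased estimator from Chen et al. (2021), "Evaluating Large Language Models Trained on Code". A minimal Python sketch of that standard estimator (not code from this paper):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: samples generated per problem
    c: samples that pass the unit tests
    k: submission budget
    """
    if n - c < k:
        return 1.0
    # 1 - C(n - c, k) / C(n, k), computed as a stable running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples with 12 passing gives pass@10 ≈ 0.47
print(pass_at_k(n=200, c=12, k=10))
```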

Our approach introduces an algorithm, LintSeq, that uses a code linter to sample across interdependent lines in source code

With LintSeq, we can generate plausible edit *trajectories* for any source code file, covering possible ways of synthesizing its contents edit-by-edit with no linter errors

12.02.2025 20:08 — 👍 1    🔁 0    💬 1    📌 0
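A rough sketch of the flavor of such a sampler (illustrative only, not the paper's implementation: the pyflakes-based `lint_ok` helper and the random back-off deletion below are assumptions standing in for LintSeq's dependency handling):

```python
import os
import random
import subprocess
import tempfile

def lint_ok(lines: list[str]) -> bool:
    """Illustrative check that a partial program is linter-error-free.
    Shells out to pyflakes here; the paper's linter choice may differ."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write("\n".join(lines))
        path = f.name
    try:
        result = subprocess.run(["pyflakes", path], capture_output=True)
    finally:
        os.remove(path)
    return result.returncode == 0

def sample_edit_trajectory(lines: list[str], rng: random.Random) -> list[list[str]]:
    """Work backwards from the complete file: repeatedly delete a random
    line plus whatever else must go for the linter to pass again, recording
    each intermediate state. Reversing the states yields an insertion-only
    edit trajectory that is linter-error-free at every step."""
    states = [lines]
    while lines:
        i = rng.randrange(len(lines))
        candidate = lines[:i] + lines[i + 1:]
        # Stand-in for the paper's dependency handling: keep deleting
        # random lines until the remaining program lints cleanly.
        while candidate and not lint_ok(candidate):
            j = rng.randrange(len(candidate))
            candidate = candidate[:j] + candidate[j + 1:]
        lines = candidate
        states.append(lines)
    return list(reversed(states))  # empty file -> ... -> full file
```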

Our paper showing that LMs benefit from human-like abstractions for code synthesis was accepted to ICLR! 🇸🇬

We show that order matters in code generation: casting code synthesis as a sequential edit problem by preprocessing examples in SFT data improves LM test-time scaling laws

12.02.2025 20:08 — 👍 10    🔁 2    💬 1    📌 1
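Concretely, "preprocessing examples" here means replacing each whole-file SFT target with a sequence of edits. A toy sketch of one such transformation using unified diffs (an assumed encoding; the paper's exact edit format may differ):

```python
import difflib

def states_to_edits(states: list[str]) -> list[str]:
    """Convert a trajectory of file states (empty -> complete) into
    unified-diff edit strings, the kind of sequential targets an LM
    could be finetuned on instead of whole files."""
    edits = []
    for before, after in zip(states, states[1:]):
        diff = difflib.unified_diff(
            before.splitlines(), after.splitlines(),
            lineterm="", n=0,  # zero context lines: pure insertions
        )
        edits.append("\n".join(diff))
    return edits

# Toy trajectory: write the function body first, then the call site.
states = [
    "",
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a + b\n\nprint(add(1, 2))",
]
for edit in states_to_edits(states):
    print(edit + "\n---")
```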

Can we extend the power of world models beyond just online model-based learning? Absolutely!

We believe the true potential of world models lies in enabling agents to reason at test time.
Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.

31.01.2025 19:24 — 👍 20    🔁 8    💬 1    📌 1
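The test-time reasoning in question is planning in the world model's latent space. A minimal sketch of what that can look like, assuming a frozen visual `encoder` (e.g., DINO features) and a learned latent `dynamics` model; the cross-entropy-method loop below is illustrative, not the authors' code:

```python
import torch

@torch.no_grad()
def plan_actions(encoder, dynamics, obs, goal_obs, horizon=10,
                 n_samples=256, n_iters=5, act_dim=2, top_k=32):
    """Cross-entropy-method planning over a latent world model:
    sample action sequences, roll them out with `dynamics`, score by
    distance to the encoded goal, and refit the sampler to the elites.
    Assumes encoder(x) returns a (1, d) feature vector."""
    z0, z_goal = encoder(obs), encoder(goal_obs)
    mu = torch.zeros(horizon, act_dim)
    std = torch.ones(horizon, act_dim)
    for _ in range(n_iters):
        actions = mu + std * torch.randn(n_samples, horizon, act_dim)
        z = z0.expand(n_samples, -1)
        for t in range(horizon):
            z = dynamics(z, actions[:, t])       # latent rollout
        cost = (z - z_goal).pow(2).sum(-1)       # distance to goal features
        elites = actions[cost.topk(top_k, largest=False).indices]
        mu, std = elites.mean(0), elites.std(0)  # refit Gaussian to elites
    return mu  # best-guess action sequence (execute open-loop or MPC-style)
```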

Williams and Zipser (1989) is a classic one! leech.cybernoid.gr/files/text/p...

30.01.2025 17:47 — 👍 5    🔁 0    💬 2    📌 0
Preview: Scaling Laws for Pre-training Agents and World Models
"The performance of embodied agents has been shown to improve by increasing model parameters, dataset size, and compute. This has been demonstrated in domains from robotics to video games, when generat..."

Finally finally finally some scaling curves for imitation learning in the large-scale-data regime: arxiv.org/abs/2411.04434

20.01.2025 14:48 — 👍 54    🔁 8    💬 2    📌 0
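Scaling curves of this kind are conventionally summarized by a power law, L(C) ≈ a · C^b with b < 0, fit in log-log space. A minimal sketch with made-up placeholder numbers (nothing below is from the paper):

```python
import numpy as np

# Fit L(C) = a * C**b by least squares in log-log space.
# Compute values and losses are made-up placeholders, not paper numbers.
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([2.1, 1.7, 1.4, 1.15])

b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)  # slope, intercept
a = np.exp(log_a)
print(f"L(C) ≈ {a:.3g} * C^{b:.3f}")  # b should come out negative
```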

Introducing 🧞Genie 2 🧞 - our most capable large-scale foundation world model, which can generate a diverse array of consistent worlds, playable for up to a minute. We believe Genie 2 could unlock the next wave of capabilities for embodied agents 🧠.

04.12.2024 16:01 — 👍 234    🔁 60    💬 15    📌 30

Now that @jeffclune.bsky.social and @joelbot3000.bsky.social are here, time for an Open-Endedness starter pack.

go.bsky.app/MdVxrtD

20.11.2024 07:08 — 👍 105    🔁 32    💬 16    📌 5
