
Ben Eysenbach

@ben-eysenbach.bsky.social

Assistant professor at Princeton CS working on reinforcement learning and AI/ML. Site: https://ben-eysenbach.github.io/ Lab: https://princeton-rl.github.io/

252 Followers  |  1 Following  |  8 Posts  |  Joined: 26.01.2025

Posts by Ben Eysenbach (@ben-eysenbach.bsky.social)

🤖 Excited to share SLAP, @yijieisabelliu.bsky.social's new algorithm that uses RL to provide better skills for planning!

Check out the website for code, videos, and pre-trained models: github.com/isabelliu0/S...

05.11.2025 16:27 — 👍 2    🔁 0    💬 0    📌 0

Kids spend years playing with blocks, building spatial+arithmetic skills. Today, AI models just read.

While AI research often conflates reasoning with language models, block-building lets us study how embodied reasoning might emerge from exploration and trial-and-error learning!

16.10.2025 23:21 — 👍 2    🔁 0    💬 0    📌 0

🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind

📣 Call for: Findings (4- or 8-page) + Tutorials tracks

πŸŽ™οΈ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social

🌐 Learn more: data-brain-mind.github.io

04.08.2025 15:28 — 👍 31    🔁 10    💬 0    📌 3

New research directions:
* model-based RL with NF models,
* goal/language-conditioned NF foundation policies,
* NFs for collocation-based planning,
* goal-conditioned NF value functions (as control barrier functions, as Lyapunov functions).
👆 Join/scoop us -- we can't do it all!

05.06.2025 18:21 — 👍 2    🔁 0    💬 0    📌 0

2/ Much of my past research is about avoiding density estimation in RL, because I've assumed that it's difficult and fickle. But, if NFs make it easy to do high-dim density estimation, there are lots of new RL algorithms to be developed:

05.06.2025 18:21 — 👍 3    🔁 0    💬 1    📌 0
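
To make the change-of-variables point above concrete: a normalizing flow yields an exact log-density in a single forward pass, with no ODE/SDE integration. Below is a minimal, illustrative NumPy sketch of one RealNVP-style affine coupling layer; the dimensions, weights, and function names are invented for illustration and are not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy state dimension (real RL states are much higher-dim)

# One affine coupling layer: split x into halves and transform the
# second half conditioned on the first. Toy weights for illustration.
W_s = rng.normal(scale=0.1, size=(D // 2, D // 2))
W_t = rng.normal(scale=0.1, size=(D // 2, D // 2))

def forward(x):
    """Map x -> z; also return log|det J| for the change of variables."""
    x1, x2 = x[: D // 2], x[D // 2:]
    s = np.tanh(x1 @ W_s)          # log-scale, bounded for stability
    t = x1 @ W_t                   # shift
    z2 = x2 * np.exp(s) + t
    return np.concatenate([x1, z2]), np.sum(s)

def log_prob(x):
    """Exact log-density under a standard-normal base distribution."""
    z, log_det = forward(x)
    log_base = -0.5 * np.sum(z ** 2) - 0.5 * D * np.log(2 * np.pi)
    return log_base + log_det

x = rng.normal(size=D)
print(log_prob(x))  # exact log p(x) in one pass -- no ODEs/SDEs
```

Because the coupling layer is triangular, the Jacobian log-determinant is just the sum of the log-scales, which is what keeps density evaluation cheap even when many layers are stacked.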

Check out @raj-ghugare.bsky.social's new paper on the surprising effectiveness of normalizing flows (NFs) in RL 🚀

This project changed my mind in 2 ways:
1/ Diffusion policies, flow models, and EBMs have become ubiquitous in RL. Turns out NFs can perform just as well -- no ODEs/SDEs required!

05.06.2025 18:21 — 👍 1    🔁 0    💬 1    📌 0

While we still don't understand precisely why depth helps so much, the benefits seem correlated with exploration. Thought experiment: What if the answer to the exploration problem in RL were to just increase network depth?

21.03.2025 16:17 — 👍 1    🔁 0    💬 0    📌 0

tldr: increase the depth of your RL networks by several orders of magnitude.

Our new paper shows that very very deep networks are surprisingly useful for RL, if you use resnets, layer norm, and self-supervised RL!

Paper, code, videos: wang-kevin3290.github.io/scaling-crl/

21.03.2025 16:17 — 👍 2    🔁 0    💬 1    📌 0
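
A rough sketch of the kind of building block the post names (a pre-norm residual MLP block with layer norm), in NumPy. The widths, depth, and initialization here are assumptions for illustration, not the paper's exact architecture; the point is that the identity skip path lets very deep stacks stay trainable.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    """Normalize a vector to zero mean, unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def residual_block(x, W1, W2):
    """Pre-norm residual block: x + MLP(LayerNorm(x))."""
    h = layer_norm(x)
    h = np.maximum(0.0, h @ W1)   # ReLU
    return x + h @ W2             # skip connection preserves the signal

D, H = 64, 256
x = rng.normal(size=D)
# Stack many blocks: each adds a bounded, normalized update, so the
# forward pass stays well-behaved even at large depth.
for _ in range(100):
    W1 = rng.normal(scale=1.0 / np.sqrt(D), size=(D, H))
    W2 = rng.normal(scale=1.0 / np.sqrt(H), size=(H, D))
    x = residual_block(x, W1, W2)
print(np.isfinite(x).all())  # → True: activations stay finite at depth 100
```

Without the skip connections and normalization, a plain 100-layer MLP with random weights would typically see its activations explode or vanish, which is one intuition for why these two ingredients matter for scaling depth.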

Excited to share new work led by @vivekmyers.bsky.social and @crji.bsky.social that proves you can learn to reach distant goals by training solely on nearby goals. The key idea is a new form of invariance, which implies generalization w.r.t. the horizon.

06.02.2025 01:13 — 👍 13    🔁 3    💬 0    📌 0