REGENT will be presented as an Oral at ICLR 2025 in Singapore 🇸🇬, an honor given to the top 1.8% of 11,672 submissions! More details at our website: bit.ly/regent-research
22.02.2025 18:55
@kaustubhsridhar.bsky.social
Building generalist agents. Final-year PhD student @UPenn, Prev: @Amazon @IITBombay http://kaustubhsridhar.github.io/
Here is a qualitative visualization of deploying REGENT in the unseen atari-pong environment.
14.12.2024 21:49
While REGENT's design choices are aimed at generalization, its gains are not limited to unseen environments: it even performs better than current generalist agents when deployed within the pre-training environments.
14.12.2024 21:49
In the four unseen ProcGen environments, REGENT also outperforms MTT, the only other generalist agent that can generalize to unseen environments via in-context learning. REGENT does so with an order of magnitude less pretraining data and a third as many parameters.
14.12.2024 21:49
REGENT also outperforms the "All Data" variants of JAT/Gato, which were pre-trained on 5-10x the amount of data.
For context, the Multi-Game DT uses 1M states to finetune on new atari envs. REGENT generalizes via RAG from ~10k states. REGENT Finetuned further improves over REGENT.
In the unseen metaworld & atari envs in the Gato setting, REGENT and R&P outperform SOTA generalist agents like JAT/Gato (the open-source reproduction of Gato). REGENT outperforms JAT/Gato even after JAT/Gato is finetuned on data from the unseen envs.
14.12.2024 21:49
We also evaluate on unseen levels and unseen environments in the ProcGen setting.
14.12.2024 21:49
We evaluate REGENT on unseen robotics and game environments in the Gato setting.
14.12.2024 21:49
REGENT has a few key ingredients, including an interpolation between R&P and the transformer. This allows the transformer to more readily generalize to unseen envs, since it is given the easier task of predicting the residual to the R&P action rather than the complete action.
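To make that concrete, here is a minimal sketch of one way such an interpolation could work, assuming discrete actions and a fixed mixing weight; the paper's actual scheme (e.g., a distance-dependent weight) may differ, and all names below are illustrative.

```python
# Illustrative sketch, not the paper's code: mix a one-hot distribution on
# the retrieved R&P action with the transformer's predicted distribution,
# assuming discrete actions and a fixed mixing weight `lam`.
import numpy as np

def interpolated_action(transformer_logits, rp_action, num_actions, lam=0.5):
    # Transformer's action distribution via a numerically stable softmax.
    probs = np.exp(transformer_logits - transformer_logits.max())
    probs /= probs.sum()
    # One-hot distribution on the nearest-neighbor (R&P) action.
    one_hot = np.eye(num_actions)[rp_action]
    # The transformer only has to model the residual on top of the R&P action.
    mixed = lam * one_hot + (1 - lam) * probs
    return int(np.argmax(mixed))
```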
14.12.2024 21:49
R&P simply picks the nearest retrieved state s′ to the query state st and plays the corresponding action a′.
REGENT retrieves the 19 closest states, places the corresponding (s, r, a) tuples in the context along with the query (st, rt-1), and acts via in-context learning in unseen envs.
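For intuition, here is a minimal sketch of R&P and of assembling such a retrieval-augmented context, assuming flat state vectors and L2 distance; the helper names (`retrieve_and_play`, `build_context`, `demo_states`) are hypothetical, not from the released code.

```python
# Illustrative sketch, not the released implementation: 1-nearest-neighbor
# R&P and k-nearest-neighbor context assembly over flat state vectors.
import numpy as np

def retrieve_and_play(query_state, demo_states, demo_actions):
    """R&P baseline: play the action of the single nearest demo state."""
    dists = np.linalg.norm(demo_states - query_state, axis=1)
    return demo_actions[np.argmin(dists)]

def build_context(query_state, prev_reward, demo_states, demo_rewards,
                  demo_actions, k=19):
    """Pack the k closest demo (state, reward, action) tuples, plus the
    query (state, reward), into one context for in-context learning."""
    dists = np.linalg.norm(demo_states - query_state, axis=1)
    nearest = np.argsort(dists)[:k]
    context = [(demo_states[i], demo_rewards[i], demo_actions[i]) for i in nearest]
    context.append((query_state, prev_reward, None))  # action slot to be predicted
    return context
```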
Inspired by RAG and the success of a simple retrieval-based 1-nearest-neighbor baseline that we call Retrieve-and-Play (R&P), we pretrain REGENT: a transformer policy whose inputs are not just the query state st and previous reward rt-1, but also retrieved tuples of (state, previous reward, action).
REGENT is pretrained on data from many training envs. It is then deployed in held-out envs with a few demos, from which it can retrieve states, rewards, and actions to use for in-context learning. **It never finetunes on the demos in the held-out envs.**
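Putting the sketched pieces together, one deployment step in a held-out env could look like this (again hypothetical; `policy` stands in for the pretrained transformer):

```python
# Illustrative deployment step reusing the hypothetical helpers sketched
# above; note there is no gradient update, only retrieval + a forward pass.
def act(state, prev_reward, policy, demo_states, demo_rewards, demo_actions,
        num_actions):
    context = build_context(state, prev_reward, demo_states, demo_rewards,
                            demo_actions)              # retrieval-augmented prompt
    logits = policy(context)                           # in-context prediction
    rp_action = retrieve_and_play(state, demo_states, demo_actions)
    return interpolated_action(logits, rp_action, num_actions)
```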
14.12.2024 21:49
Is scaling current agent architectures the most effective way to build generalist agents that can rapidly adapt?
Introducing REGENT, a generalist agent that can generalize to unseen robotics tasks and games via retrieval-augmentation and in-context learning.
Bluesky doesn't want you to see these gifs! :) Please see the rollouts in unseen environments on our website: bit.ly/regent-research
14.12.2024 19:43
We are also presenting REGENT at the Adaptive Foundation Models (this afternoon, Saturday Dec 14) and Open World Agents (tomorrow afternoon, Sunday Dec 15) workshops at NeurIPS. Please come by if you'd like to hear more!
14.12.2024 19:39
This whole project would not have been possible without Souradeep Dutta, Dinesh Jayaraman, and Insup Lee.
We have many more results, ablations, code, dataset, model, and the paper at our website: bit.ly/regent-research
The arxiv link: arxiv.org/abs/2412.04759
REGENT is far from perfect.
It cannot generalize to new embodiments (unseen mujoco envs) or long-horizon envs (like spaceinvaders & stargunner). It also cannot generalize to completely new suites, i.e., it requires similarities between the pre-training and unseen envs.
A few failed rollouts:
Cool demo of Gemini 2.0 Flash's new streaming API, by @simonwillison.net.
www.youtube.com/watch?v=mpgW...
Vancouver is so beautiful!
10.12.2024 19:28
What would Deep Thought cost for the ultimate question? bsky.app/profile/nato...
05.12.2024 17:27
What's missing to get to Deep Thought? :D
05.12.2024 17:25
In The Hitchhiker's Guide to the Galaxy, they built a huge computer (Deep Thought) to answer the ultimate question (of Life, the Universe, and Everything), and it took 7.5 million years. It seems like they clearly did both train-time and test-time scaling.
05.12.2024 17:24
Can no longer tell if LLMs are sounding like humans or some humans have always sounded like LLMs.
04.12.2024 01:45
I'd like to introduce what I've been working on at @hellorobot.bsky.social: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration.
Check it out: github.com/hello-robot/...
Thread ->
I'm still waiting for the "react/respond to the author rebuttal" from a couple of reviewers :_(
30.11.2024 17:15
Oh damn haha. Thank you for the info
29.11.2024 00:38