Vivek Myers

@vivekmyers.bsky.social

PhD student @Berkeley_AI · reinforcement learning, AI, robotics

127 Followers  |  68 Following  |  24 Posts  |  Joined: 06.11.2024

Latest posts by vivekmyers.bsky.social on Bluesky

NeurIPS 2025 Workshop DBM: Welcome to the OpenReview homepage for NeurIPS 2025 Workshop DBM

🚨 Deadline Extended 🚨
The submission deadline for the Data on the Brain & Mind Workshop (NeurIPS 2025) has been extended to Sep 8 (AoE)! 🧠✨
We invite you to submit your findings or tutorials via the OpenReview portal:
openreview.net/group?id=Neu...

27.08.2025 19:45 · 👍 4    🔁 2    💬 0    📌 0
Data on the Brain & Mind

📢 10 days left to submit to the Data on the Brain & Mind Workshop at #NeurIPS2025!

📝 Call for:
• Findings (4 or 8 pages)
• Tutorials

If you're submitting to ICLR or NeurIPS, consider submitting here too, and highlight how to use a cog neuro dataset in our tutorial track!
🔗 data-brain-mind.github.io

25.08.2025 15:43 · 👍 8    🔁 5    💬 0    📌 0

🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind

📣 Call for: Findings (4- or 8-page) + Tutorials tracks

🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social

🌐 Learn more: data-brain-mind.github.io

04.08.2025 15:28 · 👍 31    🔁 10    💬 0    📌 3
Preview
Language Models in Plato's Cave: Why language models succeeded where video models failed, and what that teaches us about AI

This is an excellent and very clear piece from Sergey Levine about the strengths and limitations of large language models.
sergeylevine.substack.com/p/language-m...

12.06.2025 16:30 · 👍 41    🔁 11    💬 2    📌 1

Normalizing Flows (NFs) check all the boxes for RL: exact likelihoods (imitation learning), efficient sampling (real-time control), and variational inference (Q-learning)! Yet they are overlooked in favor of more expensive and less flexible contemporaries like diffusion models.

Are NFs fundamentally limited?

05.06.2025 17:05 · 👍 5    🔁 1    💬 1    📌 1
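
For readers who want to see why those boxes get checked, here is a minimal sketch of a single affine coupling layer (RealNVP-style) in plain NumPy. It is illustrative only, not code from any particular paper or library; the tiny linear maps W_s and W_t stand in for learned networks. It shows exact log-likelihood via the change-of-variables formula and one-pass sampling by inverting the layer.

```python
# Minimal sketch (illustrative, not from the paper): one affine coupling layer,
# the building block of RealNVP-style normalizing flows.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                                # data dimension, split in half
W_s = 0.1 * rng.standard_normal((D // 2, D // 2))    # stand-in "network" for the scale
W_t = 0.1 * rng.standard_normal((D // 2, D // 2))    # stand-in "network" for the shift

def forward(x):
    """Data x -> latent z, plus the exact log|det Jacobian| of the map."""
    x1, x2 = x[:D // 2], x[D // 2:]
    s, t = np.tanh(x1 @ W_s), x1 @ W_t
    return np.concatenate([x1, x2 * np.exp(s) + t]), s.sum()

def inverse(z):
    """Latent z -> data x; the layer inverts in closed form, so sampling is one pass."""
    z1, z2 = z[:D // 2], z[D // 2:]
    s, t = np.tanh(z1 @ W_s), z1 @ W_t
    return np.concatenate([z1, (z2 - t) * np.exp(-s)])

def log_prob(x):
    """Exact log-density: log N(z; 0, I) + log|det Jacobian| (change of variables)."""
    z, logdet = forward(x)
    return -0.5 * (z @ z + D * np.log(2 * np.pi)) + logdet

x = rng.standard_normal(D)
print(log_prob(x))                      # exact likelihood, e.g. for imitation learning
print(inverse(rng.standard_normal(D)))  # cheap exact sample, e.g. for real-time control
```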

How can agents trained to reach (temporally) nearby goals generalize to attain distant goals?

Come to our #ICLR2025 poster now to discuss *horizon generalization*!

w/ @crji.bsky.social and @ben-eysenbach.bsky.social

📍 Hall 3 + Hall 2B #637

26.04.2025 02:12 · 👍 1    🔁 0    💬 0    📌 0

🚨 Our new #ICLR2025 paper presents a unified framework for intrinsic motivation and reward shaping: both signal the value of the RL agent's state 🤖 = external state 🌎 + past experience 🧠. Rewards based on potentials over the learning agent's state provably avoid reward hacking! 🧵

26.03.2025 00:05 · 👍 10    🔁 3    💬 1    📌 1
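
For context on the shaping half of this claim, here is a minimal sketch of potential-based shaping (Ng et al., 1999), which preserves optimal policies because the added terms telescope along any trajectory. The `potential` function, the `"progress"` field, and the state dictionaries are hypothetical placeholders, not the paper's construction; the post's point is that the potential is a function of the learning agent's state (external state plus past experience).

```python
# Minimal sketch (hypothetical potential, not the paper's): potential-based
# reward shaping, r'(s, a, s') = r(s, a, s') + gamma * Phi(s') - Phi(s).
GAMMA = 0.99

def potential(agent_state):
    """Hypothetical potential over the learning agent's state
    (external state + past experience, in the post's framing)."""
    return float(agent_state.get("progress", 0.0))

def shaped_reward(env_reward, agent_state, next_agent_state):
    """Add the shaping term; the telescoping sum leaves optimal policies unchanged."""
    return env_reward + GAMMA * potential(next_agent_state) - potential(agent_state)

# Toy usage: one transition where the agent's internal notion of progress increases.
s0, s1 = {"progress": 0.0}, {"progress": 0.5}
print(shaped_reward(0.0, s0, s1))   # 0.0 + 0.99 * 0.5 - 0.0 = 0.495
```
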
Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following

Thanks to incredible collaborators Bill Zheng, Anca Dragan, Kuan Fang, and Sergey Levine!

Website: tra-paper.github.io
Paper: arxiv.org/pdf/2502.05454

14.02.2025 01:39 · 👍 0    🔁 0    💬 0    📌 0

...but to create truly autonomous self-improving agents, we must not only imitate, but also *improve* upon the capabilities seen in training. Our findings suggest that this improvement might emerge from better task representations, rather than more complex learning algorithms. 7/

14.02.2025 01:39 · 👍 0    🔁 0    💬 1    📌 0

๐˜ž๐˜ฉ๐˜บ ๐˜ฅ๐˜ฐ๐˜ฆ๐˜ด ๐˜ต๐˜ฉ๐˜ช๐˜ด ๐˜ฎ๐˜ข๐˜ต๐˜ต๐˜ฆ๐˜ณ? Recent breakthroughs in both end-to-end robot learning and language modeling have been enabled not through complex TD-based reinforcement learning objectives, but rather through scaling imitation with large architectures and datasets... 6/

14.02.2025 01:39 โ€” ๐Ÿ‘ 0    ๐Ÿ” 0    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 0

We validated this in simulation. Across offline RL benchmarks, imitation using our TRA task representations outperformed standard behavioral cloning, especially for stitching tasks. In many cases, TRA beat "true" value-based offline RL, using only an imitation loss. 5/

14.02.2025 01:39 · 👍 0    🔁 0    💬 1    📌 0

Successor features have long been known to boost RL generalization (Dayan, 1993). Our findings suggest something stronger: successor task representations produce emergent capabilities beyond training even without RL or explicit subtask decomposition. 4/

14.02.2025 01:39 · 👍 0    🔁 0    💬 1    📌 0

This trick encourages a form of time invariance during learning: both nearby and distant goals are represented similarly. By additionally aligning language instructions ξ(ℓ) to the goal representations ψ(g), the policy can also perform new compound language tasks. 3/

14.02.2025 01:39 · 👍 0    🔁 0    💬 1    📌 0

What does temporal alignment mean? When training, our policy imitates the human actions that lead to the end goal g of a trajectory. Rather than training on the raw goals, we use a representation ψ(g) that aligns with the preceding states' "successor features" φ(s). 2/

14.02.2025 01:39 · 👍 0    🔁 0    💬 1    📌 0
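
To make the alignment idea concrete, here is a rough sketch of one way such an objective could look: an InfoNCE-style contrastive loss that pulls ψ(g) toward the successor features φ(s) of states preceding g and, as in post 3/ above, pulls language embeddings ξ(ℓ) toward ψ(g). The function names, batching, and the specific InfoNCE form are assumptions for illustration, not the paper's exact loss.

```python
# Rough sketch (an assumed InfoNCE-style form, not the paper's exact objective):
# align goal representations psi(g) with successor features phi(s) of preceding
# states, and language embeddings xi(l) with psi(g) for matching instructions.
import numpy as np

def info_nce(a, b, temp=0.1):
    """Contrastive loss; row i of `a` is the positive match for row i of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temp                        # similarities of all pairs
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))            # positives sit on the diagonal

def alignment_loss(phi_s, psi_g, xi_l):
    """Temporal alignment of states with goals, plus language-goal alignment."""
    return info_nce(phi_s, psi_g) + info_nce(xi_l, psi_g)

rng = np.random.default_rng(0)
batch, dim = 8, 16
print(alignment_loss(rng.standard_normal((batch, dim)),
                     rng.standard_normal((batch, dim)),
                     rng.standard_normal((batch, dim))))
```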

Current robot learning methods are good at imitating tasks seen during training, but struggle to compose behaviors in new ways. When training imitation policies, we found something surprising: using temporally-aligned task representations enabled compositional generalization. 1/

14.02.2025 01:39 · 👍 2    🔁 0    💬 1    📌 0

Excited to share new work led by @vivekmyers.bsky.social and @crji.bsky.social that proves you can learn to reach distant goals by training solely on nearby goals. The key idea is a new form of invariance. This invariance implies generalization w.r.t. the horizon.

06.02.2025 01:13 · 👍 13    🔁 3    💬 0    📌 0

Want to see an agent carry out long-horizon tasks when only trained on short-horizon trajectories?

We formalize and demonstrate this notion of *horizon generalization* in RL.

Check out our website! horizon-generalization.github.io

04.02.2025 20:50 · 👍 11    🔁 4    💬 0    📌 0
Preview
Horizon Generalization in Reinforcement Learning: We study goal-conditioned RL through the lens of generalization, but not in the traditional sense of random augmentations and domain randomization. Rather, we aim to learn goal-directed policies that ...

With wonderful collaborators @crji.bsky.social and @ben-eysenbach.bsky.social!
Paper: arxiv.org/abs/2501.02709
Website: horizon-generalization.github.io
Code: github.com/vivekmyers/h...

04.02.2025 20:37 · 👍 9    🔁 0    💬 0    📌 0

What does this mean in practice? To generalize to long-horizon goal-reaching behavior, we should consider how our GCRL algorithms and architectures enable invariance to planning. When possible, prefer architectures like quasimetric networks (MRN, IQE) that enforce this invariance. 6/

04.02.2025 20:37 · 👍 1    🔁 0    💬 1    📌 0

Empirical results support this theory. The degree of planning invariance and horizon generalization is correlated across environments and GCRL methods. Critics parameterized as a quasimetric distance indeed tend to generalize the most across horizons. 5/

04.02.2025 20:37 · 👍 0    🔁 0    💬 1    📌 0

Similar to how CNN architectures exploit the inductive bias of translation-invariance for image classification, RL policies can enforce planning invariance by using a *quasimetric* critic parameterization that is guaranteed to obey the triangle inequality. 4/

04.02.2025 20:37 · 👍 3    🔁 0    💬 1    📌 1
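
To make the triangle-inequality point concrete, here is a minimal sketch of one simple critic head that is a quasimetric by construction. This is illustration only: the random feature map stands in for a learned network, and it is not the MRN/IQE parameterizations referenced elsewhere in the thread. It is zero on the diagonal, asymmetric, and satisfies d(s, g) ≤ d(s, w) + d(w, g) coordinate-by-coordinate.

```python
# Minimal sketch (illustration only, not MRN/IQE): d(x, y) = sum_i max(0, f_i(y) - f_i(x))
# is a quasimetric for any feature map f: d(x, x) = 0, it is asymmetric, and since
# f(z) - f(x) = (f(y) - f(x)) + (f(z) - f(y)), each coordinate obeys the triangle
# inequality, so the sum does too.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))         # stand-in for a learned feature map f

def features(x):
    return np.tanh(x @ W)

def quasimetric(x, y):
    """Asymmetric distance from x to y, usable as a goal-conditioned critic."""
    return np.maximum(features(y) - features(x), 0.0).sum()

s, w, g = rng.standard_normal((3, 8))    # a state, a waypoint, and a goal
assert quasimetric(s, s) == 0.0
assert quasimetric(s, g) <= quasimetric(s, w) + quasimetric(w, g) + 1e-9
print(quasimetric(s, g), quasimetric(g, s))   # generally unequal: asymmetry is allowed
```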

The key to achieving horizon generalization is *planning invariance*. A policy is planning invariant if decomposing tasks into simpler subtasks doesn't improve performance. We prove planning invariance can enable horizon generalization. 3/

04.02.2025 20:37 · 👍 4    🔁 2    💬 1    📌 0
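
One way to sketch the idea (an assumption for illustration; see the paper for the precise definition): write d_π(s, g) for the expected time policy π needs to reach goal g from state s. Planning invariance then says that routing through any waypoint w cannot help, i.e. the policy's own goal-reaching cost already satisfies a triangle inequality, which is why the quasimetric critics discussed in the surrounding posts are a natural fit:

```latex
% Sketch only; the paper's formal definition may differ.
d_\pi(s, g) \;\le\; d_\pi(s, w) + d_\pi(w, g) \qquad \text{for all } s, w, g.
```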

Certain RL algorithms are more conducive to horizon generalization than others. Goal-conditioned (GCRL) methods with a bilinear critic φ(s)ᵀψ(g), as well as quasimetric methods, better enable horizon generalization. 2/

04.02.2025 20:37 · 👍 3    🔁 0    💬 1    📌 0

Reinforcement learning agents should be able to improve upon behaviors seen during training.
In practice, RL agents often struggle to generalize to new long-horizon behaviors.
Our new paper studies *horizon generalization*, the degree to which RL algorithms generalize to reaching distant goals. 1/

04.02.2025 20:37 · 👍 34    🔁 7    💬 1    📌 3
Learning to Assist Humans without Inferring Rewards

Website: empowering-humans.github.io
Paper: arxiv.org/abs/2411.02623

Many thanks to wonderful collaborators Evan Ellis, Sergey Levine, Benjamin Eysenbach, and Anca Dragan!

22.01.2025 02:17 · 👍 0    🔁 0    💬 0    📌 0

Effective empowerment could also be combined with other objectives (e.g., RLHF) to improve assistance and promote safety (preventing human disempowerment). 6/

22.01.2025 02:17 · 👍 0    🔁 0    💬 1    📌 0

In principle, this approach provides a general way to align RL agents from human interactions alone, without needing human feedback or other rewards. 5/

22.01.2025 02:17 · 👍 0    🔁 0    💬 1    📌 0

We show that optimizing this effective empowerment of the human helps in assistive settings. Theoretically, maximizing the effective empowerment maximizes an (average-case) lower bound on the human's utility/reward/objective under an uninformative prior. 4/

22.01.2025 02:17 · 👍 0    🔁 0    💬 1    📌 0

Our recent paper, "Learning to Assist Humans Without Inferring Rewards," proposes a scalable contrastive estimator for human empowerment. The estimator learns successor features to model the effects of a human's actions on the environment, approximating the *effective empowerment*. 3/

22.01.2025 02:17 · 👍 0    🔁 0    💬 1    📌 0
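
For intuition about what a contrastive empowerment estimator can look like, here is a rough sketch of the general recipe (an assumption for illustration, not the paper's exact estimator): empowerment relates to the mutual information between the human's actions and the resulting future states, and an InfoNCE-style score over paired versus shuffled (action, future) embeddings gives a lower bound on that mutual information, which the assistive agent can then be trained to increase.

```python
# Rough sketch (assumed recipe, not the paper's estimator): an InfoNCE-style
# lower bound on the mutual information between the human's actions and the
# resulting future states, a proxy for how much influence the human retains.
import numpy as np

def empowerment_lower_bound(act_emb, future_emb, temp=1.0):
    """InfoNCE bound: mean log-softmax score of the true (action, future) pair
    plus log(batch size) lower-bounds I(action; future)."""
    logits = (act_emb @ future_emb.T) / temp
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return np.mean(np.diag(log_softmax)) + np.log(len(act_emb))

rng = np.random.default_rng(0)
B, D = 32, 16
act = rng.standard_normal((B, D))   # embeddings of the human's actions
fut = rng.standard_normal((B, D))   # embeddings of the futures they produced (paired by row)
print(empowerment_lower_bound(act, fut))
```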

This distinction is subtle but important. An agent that maximizes a misspecified model of the human's reward or seeks power for itself can lead to arbitrarily bad outcomes where the human becomes disempowered. Maximizing human empowerment avoids this. 2/

22.01.2025 02:17 · 👍 0    🔁 0    💬 1    📌 0
