
Pablo Samuel Castro

@pcastr.bsky.social

Señor swesearcher @ Google DeepMind, adjunct prof at Université de Montréal and Mila. Musician. From 🇪🇨 living in 🇨🇦. https://psc-g.github.io/

3,492 Followers  |  341 Following  |  333 Posts  |  Joined: 19.11.2024

Latest posts by pcastr.bsky.social on Bluesky

Preview
The Formalism-Implementation Gap in Reinforcement Learning Research The last decade has seen an upswing in interest and adoption of reinforcement learning (RL) techniques, in large part due to its demonstrated capabilities at performing certain tasks at "super-human l...

Read the full paper here:
arxiv.org/abs/2510.16175

(I'll make a blog post soon, my webpage is quite out of date...)

18/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 0    📌 0

In summary, we should use benchmarks like the ALE for "insight-oriented exploratory research" (Herrmann et al., 2024) and scientific testing (Jordan et al., 2024).
Advancing RL science in this manner could lead to the next big breakthrough...

17/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0

Indeed, over-indexing on a single flavor of research, whether it be LLMs or reinforcement learning from human feedback (RLHF), has an opportunity cost in terms of potential, and unanticipated, breakthroughs.

16/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0

In an era of LLM obsession, RL research on video games might seem antiquated or irrelevant. I often see this in reviews: "the ALE is solved", "...not interesting", etc.
My rebuttal: the success of LLMs rests on decades of academic research that was not premised on this application.

15/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
Post image

What makes a good benchmark?
I argue they should be:
🔵 well-understood
🔵 diverse & without experimenter bias
🔵 naturally extendable
Under this lens, the ALE is still a useful benchmark for RL research, *when used properly* (i.e. to advance science, rather than "winning").
14/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
Post image Post image

Aggregate results?
When reporting aggregate performance (like IQM), the choice of which games go into the subset can have a huge impact on algo comparisons (see figure below)!
We're missing the trees for the forest!🌳

🛑Stop focusing on aggregate results, & opt for per-game analyses!🛑

13/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
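To make the subset-sensitivity concrete, here's a toy sketch in Python (game names and scores are invented, and real IQM is computed over runs and games rather than the per-game point estimates used here): changing which games enter the aggregate can flip which algorithm looks better, while the per-game view keeps the differences visible.

    # Hypothetical per-game human-normalized scores for two algorithms, A and B.
    import numpy as np
    from scipy.stats import trim_mean

    scores = {
        "Breakout": (4.2, 3.1),
        "Pong":     (1.1, 1.2),
        "Seaquest": (0.3, 1.8),
        "Qbert":    (2.5, 0.9),
    }

    def iqm(values):
        # Interquartile mean: discard the bottom/top 25%, average the rest.
        return trim_mean(np.asarray(values), proportiontocut=0.25)

    for subset in (["Breakout", "Pong", "Qbert"], list(scores)):
        a = iqm([scores[g][0] for g in subset])
        b = iqm([scores[g][1] for g in subset])
        print(f"subset={subset}: IQM(A)={a:.2f} vs IQM(B)={b:.2f}")

    # The per-game view shows where each algorithm wins, which the aggregate hides.
    for game, (a, b) in scores.items():
        print(f"{game}: A={a:.1f}, B={b:.1f}, better={'A' if a > b else 'B'}")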
Post image

Train/Eval env sets?
When ALE was introduced, Bellemare et al. recommended using 5 games for hparam tuning, & a separate set of games for eval.
This practice is no longer common, and people often use the same set of games for train/eval.
If possible, make them disjoint!

12/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
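A minimal sketch of that recommendation (the game list and tuning-set size here are illustrative; they are not the original five games):

    # Keep the games used for hyperparameter tuning disjoint from the games
    # used for reporting results.
    import random

    ALL_GAMES = ["Asterix", "Breakout", "Freeway", "MsPacman", "Pong",
                 "Qbert", "Seaquest", "SpaceInvaders"]

    rng = random.Random(0)
    tuning_games = rng.sample(ALL_GAMES, k=3)                      # tune hparams here only
    eval_games = [g for g in ALL_GAMES if g not in tuning_games]   # report results here

    assert not set(tuning_games) & set(eval_games)   # the two sets never overlap
    print("tune on:", tuning_games)
    print("evaluate on:", eval_games)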
Post image Post image

Experiment length?
While 200M env frames was the standard set by Mnih et al., there's now a wide variety of lengths in use (100k, 500k, 10M, 40M, etc.). In arxiv.org/abs/2406.17523 we showed experiment length can have a huge impact on the conclusions drawn (see first image).

11/X

28.10.2025 13:55 — 👍 3    🔁 0    💬 1    📌 0
Post image

Discount factor?
γ is central to training RL agents, but on the ALE we report undiscounted returns:
👉🏾We're evaluating algorithms using a different objective than the one they were trained for!👈🏾
To avoid ambiguity, we should report both γ_train and γ_eval.

10/X

28.10.2025 13:55 — 👍 3    🔁 0    💬 1    📌 0
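A small sketch of the mismatch on a made-up reward sequence: training optimizes the discounted return under γ_train, while the ALE number we usually report is the undiscounted return (γ_eval = 1):

    import numpy as np

    rewards = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 5.0])   # one illustrative episode

    def discounted_return(rewards, gamma):
        discounts = gamma ** np.arange(len(rewards))
        return float(np.sum(discounts * rewards))

    gamma_train, gamma_eval = 0.99, 1.0
    print("objective during training :", discounted_return(rewards, gamma_train))
    print("number usually reported   :", discounted_return(rewards, gamma_eval))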

Let's move to measuring progress & evaluating methods. Lots of great literature on this already, but there a few points I make in the paper which I think are worth highlighting.
We'll use the ALE as an illustrative example.
tl;dr: be more explicit about evaluation process!

9/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
Post image

Environment dynamics?
Given that we're dealing with a POMDP, state transitions are between Atari RAM states, & observations are affected by all the software wrappers.
Design decisions like whether end-of-life means end-of-episode affect the transition dynamics & performance!

8/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
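As a rough sketch of where this knob lives (assuming Gymnasium + ale-py; exact APIs vary by version), the terminal_on_life_loss flag changes what the agent experiences as an episode boundary:

    import ale_py
    import gymnasium as gym
    from gymnasium.wrappers import AtariPreprocessing

    if hasattr(gym, "register_envs"):
        gym.register_envs(ale_py)   # newer gymnasium/ale-py need explicit registration

    # Losing a life ends the episode...
    env_life = AtariPreprocessing(gym.make("ALE/Breakout-v5", frameskip=1),
                                  terminal_on_life_loss=True)
    # ...vs. only game-over ends it: different transition dynamics for the same game.
    env_game = AtariPreprocessing(gym.make("ALE/Breakout-v5", frameskip=1),
                                  terminal_on_life_loss=False)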

Reward function?
Since reward magnitudes vary a lot across games, Mnih et al. (2015) clipped rewards to [-1, 1] so that a single set of global hparams could be used. This can result in aliased rewards, which increases the partial observability of the system!

7/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
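A tiny sketch of the clipping and the aliasing it introduces (reward values are made up):

    import numpy as np

    raw_rewards = np.array([1.0, 10.0, 400.0, -250.0, 0.0])
    clipped = np.clip(raw_rewards, -1.0, 1.0)
    print(clipped)   # [ 1.  1.  1. -1.  0.] -- 1, 10 and 400 are now indistinguishable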

Initial state distribution?

Since Atari games are deterministic, Mnih et al. (2015) added a random number of no-op actions at the start of each episode to inject non-determinism. This makes things more interesting, but it does move away from the original Atari games, and is thus part of the formalism-implementation gap.

6/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
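A sketch of what random no-op starts look like in code (the no-op action id and the cap of 30 follow common practice; Gymnasium's AtariPreprocessing wrapper exposes this as noop_max):

    import random

    NOOP_ACTION = 0

    def reset_with_noops(env, noop_max=30, rng=random):
        # Execute a random-length prefix of no-ops so episodes don't all start
        # from the exact same state of a deterministic game.
        obs, info = env.reset()
        for _ in range(rng.randint(1, noop_max)):
            obs, reward, terminated, truncated, info = env.step(NOOP_ACTION)
            if terminated or truncated:
                obs, info = env.reset()
        return obs, info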
Post image

What about actions?
In the ALE you can use a "minimal" action set or the full action set, & you see both being used in the literature.
This choice matters a ton, but you don't always see it stated explicitly!

5/X

28.10.2025 13:55 — 👍 3    🔁 0    💬 1    📌 0
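A sketch of where this choice gets made (assuming ale-py's Gymnasium registration; flag names may differ across versions):

    import ale_py
    import gymnasium as gym

    if hasattr(gym, "register_envs"):
        gym.register_envs(ale_py)   # newer gymnasium/ale-py need explicit registration

    env_min = gym.make("ALE/Breakout-v5")                            # minimal action set
    env_full = gym.make("ALE/Breakout-v5", full_action_space=True)   # all 18 actions
    print(env_min.action_space, env_full.action_space)               # e.g. Discrete(4) vs Discrete(18)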
Post image

What's the MDP state space? Atari frames?
Nope, single Atari frames are not Markovian => to get (approximately) Markovian policies, design choices like frame skipping/stacking & max-pooling were made.

*This means we're dealing with a POMDP!*
And these choices matter a ton (see image below)!

4/X

28.10.2025 13:55 — 👍 5    🔁 0    💬 1    📌 0
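A sketch of the usual pipeline these choices produce (the stack size k=4 is the common default, not something the ALE prescribes): max-pool consecutive emulator frames to undo sprite flicker, then stack the k most recent processed frames so the policy sees some history.

    from collections import deque
    import numpy as np

    def maxpool_pair(frame_a, frame_b):
        # Pixel-wise max of two consecutive raw frames (Atari sprites can flicker).
        return np.maximum(frame_a, frame_b)

    k = 4
    stack = deque(maxlen=k)   # the k most recent processed frames

    def observe(processed_frame):
        stack.append(processed_frame)
        while len(stack) < k:             # pad at the start of an episode
            stack.append(processed_frame)
        return np.stack(stack, axis=-1)   # (H, W, k): the "state" the network actually sees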
Post image

Let's take the ALE and try to be explicit about this mapping.

Stella is the emulator of the Atari 2600; we use the ALE as a wrapper around it, which comes with its own design decisions.
On top of that, we typically interact with something like Gymnasium/CleanRL.

3/X

28.10.2025 13:55 — 👍 2    🔁 0    💬 1    📌 0
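A sketch of those layers in code (module and env names assume ale-py and Gymnasium; details vary by version):

    import ale_py
    import gymnasium as gym

    if hasattr(gym, "register_envs"):
        gym.register_envs(ale_py)    # newer gymnasium/ale-py need explicit registration

    ale = ale_py.ALEInterface()      # ALE layer: wraps the Stella emulator directly;
    # ale.loadROM(...) would expose raw screen/RAM with few extra assumptions.

    env = gym.make("ALE/Pong-v5")    # Gymnasium layer on top: frameskip, sticky actions,
                                     # observation type, etc. are all decided here.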
Post image Post image

Most RL papers include "This is an MDP..." formalisms, maybe prove a theorem or two, and then evaluate on some benchmark like the ALE.
However, we almost never *explicitly* map our MDP formalism to the envs we evaluate on! This creates a *formalism-implementation gap*!

2/X

28.10.2025 13:55 — 👍 4    🔁 0    💬 1    📌 0
Post image

🚨The Formalism-Implementation Gap in RL research🚨

Lots of progress in RL research over the last 10 years, but too much of it has been performance-driven => overfitting to benchmarks (like the ALE).

1⃣ Let's advance the science of RL
2⃣ Let's be explicit about how benchmarks map to the formalism

1/X

28.10.2025 13:55 — 👍 42    🔁 4    💬 1    📌 2

Proud to announce that Meta-World+ was accepted to NeurIPS Datasets and Benchmarks! Meta-World is a common benchmark for multi-task and meta-RL research! However, it has been very difficult to do effective science with Meta-World, as different versions produce different results.

19.09.2025 23:21 — 👍 18    🔁 3    💬 1    📌 3
Preview
Simplicial Embeddings Improve Sample Efficiency in Actor-Critic Agents Recent works have proposed accelerating the wall-clock training time of actor-critic methods via the use of large-scale environment parallelization; unfortunately, these can sometimes still require la...

This work was led by @johanobandoc.bsky.social and @waltermayor.bsky.social , with @lavoiems.bsky.social , Scott Fujimoto and Aaron Courville.

Read the paper at arxiv.org/abs/2510.13704

11/X

20.10.2025 14:06 — 👍 3    🔁 1    💬 0    📌 0

The effectiveness of SEMs demonstrates that RL training w/ sparse & structured representations can yield good performance. We're excited to explore value-based, multi-objective, & scaled-up architectures.
Try integrating SEMs and let us know what you find!

10/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

The gains don’t stop there!
We evaluated Flow Q-Learning in offline-to-online training as well as FastTD3 in multitask settings, and observe gains throughout.

9/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

Is SEM only effective with TD3-style agents and/or on HumanoidBench?
No!
We evaluate FastTD3-SimBaV2 and FastSAC on HumanoidBench, FastTD3 on Booster T1, as well as PPO on Atari-10 and IsaacGym, and observe gains in all these settings.

8/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

Are our results specific to the design choices behind FastTD3?
No!
We ablate FastTD3 components and vary some of the training configurations and find that the addition of SEM results in improvements under all these settings, suggesting its benefits are general.

7/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

We find SEM to be the most effective when compared against alternative approaches for structuring representations. We also evaluated the impact of the choices of V and L, finding that higher V with lower L seems to yield the best results.

6/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
Post image Post image

Why do SEMs improve performance?
Our analyses show SEMs increase the effective rank of actor features while bounding their norms, and yield lower losses, more consistency across critics, & sparser representations, ultimately resulting in improved performance and sample efficiency.

5/X

20.10.2025 14:06 — 👍 2    🔁 0    💬 1    📌 0
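For readers unfamiliar with the effective-rank diagnostic mentioned above, here is one common variant (exponentiated entropy of the normalized singular values; the paper's exact definition may differ), run on made-up features:

    import numpy as np

    def effective_rank(features, eps=1e-8):
        # features: (batch, dim) matrix of representations.
        s = np.linalg.svd(features, compute_uv=False)
        p = s / (s.sum() + eps)                    # normalized singular values
        entropy = -(p * np.log(p + eps)).sum()
        return float(np.exp(entropy))              # roughly: how many directions are "in use"

    feats = np.random.default_rng(0).normal(size=(256, 64))
    print(effective_rank(feats))                   # close to 64 for random features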
Post image Post image

We use FastTD3 (arxiv.org/abs/2505.22642), a recent baseline for continuous control which is performant and fast, to evaluate the benefits of SEM. As seen in the second figure, SEMs yield improvements throughout.

4/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

We first show that non-stationarity amplifies representation collapse with a CIFAR-10 experiment in which we induce non-stationarity by randomly shuffling labels every 20 epochs. Using SEMs helps avoid collapse, reduces neuron dormancy, and results in lower loss.

3/X

20.10.2025 14:06 — 👍 3    🔁 0    💬 1    📌 0
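One way to realize this kind of label-shuffling non-stationarity in code (a sketch; dataset loading and the training step are elided, and the exact protocol is in the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    num_classes, num_epochs, shuffle_every = 10, 100, 20

    labels = rng.integers(0, num_classes, size=50_000)   # stand-in for CIFAR-10 labels
    for epoch in range(num_epochs):
        if epoch > 0 and epoch % shuffle_every == 0:
            perm = rng.permutation(num_classes)           # new class -> class mapping
            labels = perm[labels]                         # relabel the whole dataset
        # ... train for one epoch on (images, labels) here ...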
Video thumbnail

We take the view that discrete & sparse representations are more stable, more robust to noise, and more interpretable. SEMs are differentiable modules which partition latent representations into L simplices of size V.

2/X

20.10.2025 14:06 — 👍 2    🔁 0    💬 1    📌 0
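A minimal PyTorch sketch of the idea (shapes and temperature are illustrative; see the paper for the exact formulation): split the latent vector into L groups of size V and apply a softmax within each group, so every group lies on a V-dimensional simplex.

    import torch
    import torch.nn as nn

    class SimplicialEmbedding(nn.Module):
        def __init__(self, num_simplices: int, simplex_dim: int, temperature: float = 1.0):
            super().__init__()
            self.L, self.V, self.tau = num_simplices, simplex_dim, temperature

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            # z: (batch, L * V) pre-activation features from the encoder.
            z = z.view(z.shape[0], self.L, self.V)
            z = torch.softmax(z / self.tau, dim=-1)   # bounded, sums to 1 within each group
            return z.reshape(z.shape[0], self.L * self.V)

    sem = SimplicialEmbedding(num_simplices=8, simplex_dim=16)
    out = sem(torch.randn(32, 8 * 16))                # -> (32, 128)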
Post image

🔊Simplicial Embeddings (SEMs) Improve Sample Efficiency in Actor-Critic Agents🔊

In our recent preprint we demonstrate that the use of well-structured representations (SEMs) can dramatically improve sample efficiency in RL agents.

1/X

20.10.2025 14:06 — 👍 13    🔁 3    💬 1    📌 0
