Read the full paper here:
arxiv.org/abs/2510.16175
(I'll make a blog post soon, my webpage is quite out of date...)
18/X
In summary, we should use benchmarks like the ALE for "insight-oriented exploratory research" (Herrmann et al., 2024) and scientific testing (Jordan et al., 2024).
Advancing RL science in this manner could lead to the next big breakthrough...
17/X
Indeed, over-indexing on a single flavor of research, whether it be LLMs or reinforcement learning from human feedback (RLHF), has an opportunity cost in terms of potential, and unanticipated, breakthroughs.
16/X
In an era of LLM-obsession, RL research on video games might seem antiquated or irrelevant. I often see this in reviews: "the ALE is solved", "...not interesting", etc.
My counter: the success of LLMs rests on decades of academic research that was not premised on that application.
15/X
What makes a good benchmark?
I argue they should be:
🔵 well-understood
🔵 diverse & without experimenter-bias
🔵 naturally extendable
Under this lens, the ALE is still a useful benchmark for RL research, *when used properly* (i.e. to advance science, rather than "winning").
14/X
Aggregate results?
When reporting aggregate performance (like IQM), the choice of games subset can have a huge impact on algo comparisons (see figure below)!
We're missing the trees for the forest! 🌳
Stop focusing on aggregate results, & opt for per-game analyses!
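As a toy illustration (not the paper's setup; all game names and scores below are made up), the same two algorithms can rank differently under IQM depending on the game subset, while the per-game numbers tell the full story:
```python
# Toy sketch: IQM over two different game subsets can flip the comparison
# between algorithms; all scores here are made-up placeholders.
import numpy as np
from scipy.stats import trim_mean  # trim_mean(x, 0.25) is the interquartile mean

scores = {
    "algo_A": {"Pong": 1.1, "Breakout": 3.0, "Seaquest": 0.2, "Asterix": 2.5},
    "algo_B": {"Pong": 1.0, "Breakout": 1.2, "Seaquest": 0.9, "Asterix": 2.6},
}
subsets = {
    "subset_1": ["Pong", "Breakout", "Asterix"],
    "subset_2": ["Pong", "Seaquest", "Asterix"],
}

for algo, per_game in scores.items():
    print(algo, "per-game:", per_game)
    for name, games in subsets.items():
        vals = np.array([per_game[g] for g in games])
        # note: rliable-style IQM also aggregates over runs; this is games-only
        print(f"  IQM over {name}: {trim_mean(vals, 0.25):.2f}")
```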
13/X
Train/Eval env sets?
When ALE was introduced, Bellemare et al. recommended using 5 games for hparam tuning, & a separate set of games for eval.
This practice is no longer common, and people often use the same set of games for train/eval.
If possible, make them disjoint!
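A trivial but easy-to-skip guardrail, sketched below (the tuning list mirrors my recollection of the 5 games from Bellemare et al.; the eval list is just a placeholder):
```python
# Sketch: keep the hyperparameter-tuning games and the evaluation games disjoint.
TUNING_GAMES = {"Asterix", "BeamRider", "Freeway", "Seaquest", "SpaceInvaders"}
EVAL_GAMES = {"Breakout", "Enduro", "Pong", "Qbert", "Riverraid"}  # placeholder set

assert TUNING_GAMES.isdisjoint(EVAL_GAMES), "tuning and eval game sets overlap!"
```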
12/X
Experiment length?
While 200M env frames was the standard set by Mnih et al, now there's a wide variety of lengths used (100k, 500k, 10M, 40M, etc.). In arxiv.org/abs/2406.17523 we showed exp length can have a huge impact on conclusions drawn (see first image).
11/X
Discount factor?
γ is central to training RL agents, but on the ALE we report undiscounted returns:
We're evaluating algorithms using a different objective than the one they were trained for!
To avoid ambiguity, we should report both γ_train and γ_eval.
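To make the distinction concrete, here's a minimal sketch (toy reward sequence, assumed γ values) of the same episode scored under the training objective vs. the number we typically report:
```python
# Sketch: the same episode scored with gamma_train (what the agent optimizes)
# vs gamma_eval = 1.0 (the undiscounted return usually reported on the ALE).
def discounted_return(rewards, gamma):
    return sum(gamma**t * r for t, r in enumerate(rewards))

rewards = [0.0, 0.0, 1.0, 0.0, 1.0]  # toy reward sequence
gamma_train, gamma_eval = 0.99, 1.0
print(discounted_return(rewards, gamma_train))  # ~1.94: training objective
print(discounted_return(rewards, gamma_eval))   # 2.0: reported score
```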
10/X
Let's move to measuring progress & evaluating methods. Lots of great literature on this already, but there are a few points I make in the paper which I think are worth highlighting.
We'll use the ALE as an illustrative example.
tl;dr: be more explicit about evaluation process!
9/X
Environment dynamics?
Given that we're dealing with a POMDP, state transitions are between Atari RAM states, & observations are affected by all the software wrappers.
Design decisions like whether end-of-life means end-of-episode affects transition dynamics & performance!
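For instance, here's a minimal sketch of the end-of-life-as-end-of-episode choice as a Gymnasium-style wrapper (assuming the wrapped env exposes the ALE's lives() count, as ale-py envs do; not the exact wrapper from any particular library):
```python
# Sketch of one design choice: treat losing a life as episode termination.
# Assumes the wrapped env exposes `unwrapped.ale.lives()` (as ale-py envs do).
import gymnasium as gym

class LifeLossTermination(gym.Wrapper):
    def __init__(self, env):
        super().__init__(env)
        self.lives = 0

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.lives = self.env.unwrapped.ale.lives()
        return obs, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        lives = self.env.unwrapped.ale.lives()
        if 0 < lives < self.lives:
            terminated = True  # life lost => the agent sees an episode boundary
        self.lives = lives
        return obs, reward, terminated, truncated, info
```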
8/X
Reward function?
Since reward magnitudes vary a lot across games, Mnih et al. (2015) clipped rewards to [-1, 1] so that a single set of global hparams could work across games. This can result in aliased rewards, which increases the partial observability of the system!
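A sketch of that choice as a Gymnasium-style reward wrapper (not any specific codebase's implementation):
```python
# Sketch: clip rewards to [-1, 1]. A +100 and a +1 both become +1, so reward
# magnitudes are aliased and no longer observable to the agent.
import gymnasium as gym
import numpy as np

class ClipReward(gym.RewardWrapper):
    def reward(self, reward):
        return float(np.clip(reward, -1.0, 1.0))
```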
7/X
Initial state distribution?
Since Atari games are deterministic, Mnih et al. (2015) added a random number of no-op actions at the start of each episode to inject non-determinism. This is more interesting, but does move away from the original Atari games, thus part of the formalism-implementation gap.
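Roughly, the mechanism looks like this (a sketch, with an assumed cap of 30 no-ops; action 0 is NOOP in the ALE):
```python
# Sketch: execute a random number of no-ops after reset so the deterministic
# emulator starts each episode from a (slightly) different state.
import gymnasium as gym
import numpy as np

class NoopStart(gym.Wrapper):
    def __init__(self, env, max_noops: int = 30, noop_action: int = 0):
        super().__init__(env)
        self.max_noops = max_noops
        self.noop_action = noop_action

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        for _ in range(np.random.randint(1, self.max_noops + 1)):
            obs, _, terminated, truncated, info = self.env.step(self.noop_action)
            if terminated or truncated:
                obs, info = self.env.reset(**kwargs)
        return obs, info
```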
6/X
What about actions?
In the ALE you can use a "minimal set" or the full set, & you see both being used in the literature.
This choice matters a ton, but you don't always see it stated explicitly!
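With ale-py's v5 registrations in Gymnasium this is a single constructor flag, to the best of my knowledge (treat the exact kwarg as an assumption):
```python
# Sketch: minimal vs. full action set for the same game.
import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # registers the ALE/... environment ids
minimal = gym.make("ALE/Breakout-v5")                       # minimal action set
full = gym.make("ALE/Breakout-v5", full_action_space=True)  # full 18-action set
print(minimal.action_space, full.action_space)  # e.g. Discrete(4) vs Discrete(18)
```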
5/X
What's the MDP state space? Atari frames?
Nope, single Atari frames are not Markovian => to recover (approximately) Markovian inputs for our policies, design choices like frame skipping/stacking & max-pooling were made.
*This means we're dealing with a POMDP!*
And these choices matter a ton (see image below)!
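Here's a rough sketch of those choices as a simplified, wrapper-like helper (not any library's exact implementation; the usual skip=4, stack=4 defaults are assumed):
```python
# Sketch: repeat each action `skip` times, max-pool the last two raw frames
# (to handle sprite flicker), and stack the last `stack` pooled frames so
# quantities like velocity become observable.
from collections import deque
import numpy as np

class SkipMaxPoolStack:
    def __init__(self, env, skip: int = 4, stack: int = 4):
        self.env, self.skip = env, skip
        self.frames = deque(maxlen=stack)

    def reset(self, **kwargs):
        frame, info = self.env.reset(**kwargs)
        for _ in range(self.frames.maxlen):
            self.frames.append(frame)  # initialize the stack with the first frame
        return np.stack(self.frames), info

    def step(self, action):
        total_reward, last_two = 0.0, deque(maxlen=2)
        terminated = truncated = False
        info = {}
        for _ in range(self.skip):
            frame, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            last_two.append(frame)
            if terminated or truncated:
                break
        self.frames.append(np.max(np.stack(last_two), axis=0))  # max-pool
        return np.stack(self.frames), total_reward, terminated, truncated, info
```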
4/X
Let's take the ALE and try to be explicit about this mapping.
Stella is the emulator of the Atari 2600; we use the ALE as a wrapper around it, which comes with its own design decisions.
But typically we interact with something like Gymnasium/CleanRL on top of that.
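Roughly, the layering looks like this (a sketch; the ROM path is hypothetical, and the v5 defaults are whatever your installed ale-py/Gymnasium versions ship):
```python
# Sketch of the software stack: Stella (emulator) sits below ale_py's
# ALEInterface, which sits below the Gymnasium registration most code uses.
import ale_py
import gymnasium as gym

# Direct ALE interface on top of Stella:
ale = ale_py.ALEInterface()
ale.loadROM("/path/to/breakout.bin")  # hypothetical ROM path

# What most experiments actually touch: the Gymnasium layer, which adds its
# own defaults (frame skip, sticky actions, etc.) on top of the ALE's.
gym.register_envs(ale_py)
env = gym.make("ALE/Breakout-v5")
```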
3/X
Most RL papers include "This is an MDP..." formalisms, maybe prove a theorem or two, and then evaluate on some benchmark like the ALE.
However, we almost never *explicitly* map our MDP formalism to the envs we evaluate on! This creates a *formalism-implementation gap*!
2/X
🚨The Formalism-Implementation Gap in RL research🚨
Lots of progress in RL research over the last 10 years, but too much of it is performance-driven => overfitting to benchmarks (like the ALE).
1️⃣ Let's advance the science of RL
2️⃣ Let's be explicit about how benchmarks map to the formalism
1/X
Proud to announce that Meta-World+ was accepted to the NeurIPS Datasets and Benchmarks track! Meta-World is a common benchmark for multi-task and meta-RL research! However, it was very difficult to do effective science with Meta-World, as different versions produce different results.
This work was led by @johanobandoc.bsky.social and @waltermayor.bsky.social, with @lavoiems.bsky.social, Scott Fujimoto and Aaron Courville.
Read the paper at arxiv.org/abs/2510.13704
11/X
The effectiveness of SEMs demonstrates that RL training w/ sparse & structured representations can yield good performance. We're excited to explore value-based, multi-objective, & scaled-up architectures.
Try integrating SEMs and let us know what you find!
10/X
The gains donβt stop there!
We evaluated Flow Q-Learning in offline-to-online and online training, as well as FastTD3 in multitask settings, and observe gains throughout.
9/X
Is SEM only effective with TD3-style agents and/or on HumanoidBench?
No!
We evaluate FastTD3-SimBaV2 and FastSAC on HumanoidBench, FastTD3 on Booster T1, as well as PPO on Atari-10 and IsaacGym, and observe gains in all these settings.
8/X
Are our results specific to the design choices behind FastTD3?
No!
We ablate FastTD3 components and vary some of the training configurations and find that the addition of SEM results in improvements under all these settings, suggesting its benefits are general.
7/X
We find SEM to be the most effective when compared against alternative approaches for structuring representations. We also evaluated the impact of the choices of V and L, finding that higher V with lower L seems to yield the best results.
6/X
Why do SEMs improve performance?
Our analyses show they increase the effective rank of actor features while bounding their norms, reduce losses, improve consistency across critics, & yield sparser representations, ultimately resulting in improved performance and sample efficiency.
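For reference, one common way to measure effective rank (a sketch; the paper may use a different variant):
```python
# Sketch of an "effective rank" diagnostic: exp of the entropy of the
# normalized singular values of the feature matrix (higher = less collapsed).
import numpy as np

def effective_rank(features: np.ndarray, eps: float = 1e-8) -> float:
    """features: (batch, dim) matrix of actor representations."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / (s.sum() + eps)
    entropy = -(p * np.log(p + eps)).sum()
    return float(np.exp(entropy))

print(effective_rank(np.random.randn(256, 128)))  # placeholder features
```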
5/X
We use FastTD3 (arxiv.org/abs/2505.22642), a recent baseline for continuous control which is performant and fast, to evaluate the benefits of SEM. As seen in the second figure, SEMs yield improvements throughout.
4/X
We first show that non-stationarity amplifies representation collapse by evaluating on a CIFAR-10 experiment where we create non-stationarity by randomly shuffling labels every 20 epochs. Using SEMs helps avoid collapse, reduces neuron dormancy, and results in lower loss.
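One way to set up that kind of non-stationarity (a sketch; the paper's exact protocol may differ, and the relabeling scheme here is an assumption):
```python
# Sketch: create non-stationarity on CIFAR-10 by re-permuting the class labels
# every 20 epochs (dataset loading and model/training code omitted).
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=50_000)  # stand-in for the CIFAR-10 labels

for epoch in range(100):
    if epoch % 20 == 0:
        permutation = rng.permutation(10)      # new class -> class mapping
        shuffled_labels = permutation[labels]  # relabel the whole dataset
    # ... train one epoch on (images, shuffled_labels) ...
```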
3/X
We take the view that discrete & sparse representations are stabler, more robust to noise, and more interpretable. SEMs are differentiable modules which partition latent representations into L simplices of size V.
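As a rough sketch of the idea (not the paper's exact module; the temperature and the L/V values below are placeholders):
```python
# Minimal sketch of a simplicial embedding layer: project to L * V dims, then
# apply a per-group softmax so each of the L groups lies on a V-dim simplex.
import torch
import torch.nn as nn

class SimplicialEmbedding(nn.Module):
    def __init__(self, in_dim: int, L: int, V: int, tau: float = 1.0):
        super().__init__()
        self.L, self.V, self.tau = L, V, tau
        self.proj = nn.Linear(in_dim, L * V)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(x).view(-1, self.L, self.V)
        z = torch.softmax(z / self.tau, dim=-1)  # each group sums to 1 (a simplex)
        return z.flatten(start_dim=1)            # back to an (L * V)-dim vector

# usage: drop it between the encoder and the actor/critic heads
sem = SimplicialEmbedding(in_dim=256, L=64, V=8)
features = torch.randn(32, 256)
print(sem(features).shape)  # torch.Size([32, 512])
```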
2/X
Simplicial Embeddings (SEMs) Improve Sample Efficiency in Actor-Critic Agents
In our recent preprint we demonstrate that the use of well-structured representations (SEMs) can dramatically improve sample efficiency in RL agents.
1/X