Marlos C. Machado


@marloscmachado.bsky.social

Assistant Professor at the University of Alberta. Amii Fellow, Canada CIFAR AI Chair. Machine learning researcher. All things reinforcement learning. 📍 Edmonton, Canada 🇨🇦 🔗 https://webdocs.cs.ualberta.ca/~machado/ 🗓️ Joined November 2024

869 Followers 221 Following 50 Posts Joined Nov 2024
3 months ago
Government of Canada launches new initiative to recruit world-leading researchers - Canada.ca Canada will invest $1.7 billion to attract top global talent

"Canada Impact+ Research Chairs program—a new $1 billion investment that will provide Canadian institutions the opportunity to recruit top-tier international researchers with expertise in key areas ..."

www.canada.ca/en/innovatio...

2 0 0 0
3 months ago

Thrilled to start 2026 as faculty in Psych & CS
@ualberta.bsky.social + Amii.ca Fellow! 🥳 Recruiting students to develop theories of cognition in natural & artificial systems 🤖💭🧠. Find me at #NeurIPS2025 workshops (speaking coginterp.github.io/neurips2025 & organising @dataonbrainmind.bsky.social)

103 27 4 1
3 months ago

The Computing Science Dept. at the University of Alberta has multiple faculty job openings. Please share this broadly. We have a great environment!

- CS Theory: tinyurl.com/zrh9mk69
- Network/Cyber Security: tinyurl.com/renxazzy
- Robotics/CV/Graphics: tinyurl.com/ypcsfbff

9 2 0 0
3 months ago

The Department of Computing Science at the University of Alberta has an opening for another tenure-track faculty position in robotics. Please spread the word.

I can attest to how awesome our department and @amiithinks.bsky.social are!

(Official job posting coming soon.)

3 0 0 0
5 months ago

Ratatouille (2007)

6 0 0 0
5 months ago

This paper has now been accepted at @neuripsconf.bsky.social!

Huge congratulations, Hon Tik (Rick) Tse and Siddarth Chandrasekar.

8 3 0 0
6 months ago

2/2: “Conquerors live in dread of the day when they are shown to be, not superior, but simply lucky.”

― N.K. Jemisin, The Stone Sky

2 0 0 0
6 months ago

1/2: “But there are none so frightened, or so strange in their fear, as conquerors. They conjure phantoms endlessly, terrified that their victims will someday do back what was done to them—even if, in truth, their victims couldn’t care less about such pettiness and have moved on.”

2 1 1 0
7 months ago
RLC 2025 - Outstanding Paper Awards

Excited to announce the RLC best paper awards! Like last year, we wanted to highlight the many excellent ways you can do research.
rl-conference.cc/RLC2025Award...

10 6 0 1
7 months ago

*RLC Journal to Conference Track:*
(Originally published at TMLR)

- Deep RL track (Thu): AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning by S. Pramanik

1 0 0 0
7 months ago

*RLC Full Papers:*
(These are great papers!)

- Deep RL track (Thu): Deep Reinforcement Learning with Gradient Eligibility Traces by E. Elelimy
- Foundations track (Fri): An Analysis of Action-Value Temporal-Difference Methods That Learn State Values by B. Daley and P. Nagarajan

1 0 1 0
7 months ago

*RLC Workshop Papers (2/2):*
Inductive Biases in RL
sites.google.com/view/ibrl-wo...

- A Study of Value-Aware Eigenoptions by H. Kotamreddy

0 0 1 0
7 months ago
Workshop on Reinforcement Learning Beyond Rewards: Ingredients for Developing Generalist Agents

*RLC Workshop Papers (1/2):*
RL Beyond Rewards
rlbrew2-workshop.github.io

- Tue 11:59 (spotlight talk): Towards An Option Basis To Optimize All Rewards by S. Chandrasekar
- The World Is Bigger: A Computationally-Embedded Perspective on the Big World Hypothesis by A. Lewandowski

0 0 1 0
7 months ago

Here's what our group will be presenting at RLC'25.

*Invited Talks at Workshops:*
Tue 10:00: The Causal RL Workshop sites.google.com/uci.edu/crlw...
Tue 14:30: Inductive Biases in RL (IBRL) Workshop
sites.google.com/view/ibrl-wo...
Tue 15:00: Panel Discussion at IBRL Workshop

0 0 1 0
7 months ago

RLC starts tomorrow here in Edmonton. I couldn't be more excited! It has a fantastic roster of speakers, great papers, and workshops. And this time, it is in Edmonton 😁

@rl-conference.bsky.social is my favourite conference, and no, it is not because I am one of its organizers this year.

12 3 0 0
8 months ago

This was a great long-term effort from @martinklissarov.bsky.social, Akhil Bagaria, and @ray-luo.bsky.social, and it led to a great overview of the ideas behind leveraging temporal abstractions in AI.

If anything, I think this is a very useful resource for anyone interested in this field!

6 1 0 0
9 months ago

To align better with workshop acceptance dates, RLC is extending its early registration deadline to June 13th!

8 3 0 1
9 months ago

9/9: I genuinely think AgarCL might unlock new research avenues in CRL, including loss of plasticity, exploration, representation learning, and more. I do hope you consider using it.

Repo: github.com/machado-rese...
Website: agarcl.github.io
Preprint: arxiv.org/abs/2505.18347

6 0 0 0
9 months ago

8/9: Well, if you are still interested, you should probably read the paper. It is interesting to see that most of the agents we considered reached human-level performance only in the most benign settings, and we used a lot of compute here!

4 1 1 0
9 months ago

7/9: Through mini-games, we tried to quantify and isolate some of the challenges AgarCL poses, including partial observability, non-stationarity, exploration, hyperparameter tuning, and the non-episodic nature of the environment (so easy to forget!). Where do our agents "break"?

1 0 1 0
9 months ago

6/9: Importantly, this is a challenge problem that forces us to deal with many problems we often avoid, such as hyperparameter sweeps and exploration in CRL.

It is perhaps no surprise that the classic algorithms we considered couldn't really make much progress in the full game.

2 0 1 0
9 months ago

5/9: Over time, even the agent's observation will change, as the camera needs to zoom out to accommodate more agents; not to mention that there are other agents in the environment. I'm very excited about AgarCL because I think it allows us to ask questions we couldn't before.

2 0 1 0
9 months ago

4/9: AgarCL is an adaptation of agar.io, a game with simple mechanics that lead to complex interactions. It's non-episodic, and a key aspect is that the agent dynamics change as it accumulates mass: It becomes slower, gains new affordances, sheds more mass, etc.

3 0 1 1
9 months ago

3/9: AgarCL is our attempt at an environment with the complexity of a "big world", but in a smooth way, where the "laws of physics" don't change. It has complex dynamics, partial observability, non-stationarity, pixel-based observations, and a hybrid action space.

3 0 1 0
9 months ago

2/9: CRL is often motivated by the idea that the world is bigger than the agent, requiring tracking. We usually simulate this with non-stationarity by cycling through classic episodic problems. I've written papers like this, but it feels too artificial.

arxiv.org/abs/2303.07507

3 0 1 0
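The "cycling through classic episodic problems" setup mentioned above can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the `CyclingTasks` wrapper and the toy `Counter` task are made-up names, standing in for a real environment interface.

```python
import itertools

class Counter:
    """Toy stand-in for an episodic task; returns its name as the observation."""
    def __init__(self, name):
        self.name = name
    def reset(self):
        return self.name
    def step(self, action):
        return self.name, 0.0, True  # obs, reward, done

class CyclingTasks:
    """Induce non-stationarity by swapping the task every `period` episodes."""
    def __init__(self, tasks, period):
        self.cycle = itertools.cycle(tasks)
        self.period = period
        self.episodes = 0
        self.task = next(self.cycle)
    def reset(self):
        if self.episodes and self.episodes % self.period == 0:
            self.task = next(self.cycle)  # the "world" changes under the agent
        self.episodes += 1
        return self.task.reset()
    def step(self, action):
        return self.task.step(action)

env = CyclingTasks([Counter("A"), Counter("B")], period=2)
print([env.reset() for _ in range(6)])  # ['A', 'A', 'B', 'B', 'A', 'A']
```

The artificiality the post points at is visible here: the non-stationarity arrives on a fixed schedule, external to the agent's own behavior, unlike in AgarCL where the dynamics shift as a consequence of what the agent does.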
9 months ago

📢 I'm very excited to release AgarCL, a new evaluation platform for research in continual reinforcement learning‼️

Repo: github.com/machado-rese...
Website: agarcl.github.io
Preprint: arxiv.org/abs/2505.18347

Details below 👇

29 8 1 0
9 months ago

This is great, thanks for sharing! We will read your paper carefully.

1 0 0 0
9 months ago
Reward-Aware Proto-Representations in Reinforcement Learning In recent years, the successor representation (SR) has attracted increasing attention in reinforcement learning (RL), and it has been used to address some of its key challenges, such as exploration, c...

7/7: We just scratched the surface here, but I think this could be the beginning of something interesting that might be relevant to research questions ranging from safety in RL all the way to the cognitive sciences.

Again, here's the preprint by Tse et al.: arxiv.org/abs/2505.16217

9 0 0 0
9 months ago

6/7: We also show that, when compared to the SR, the DR gives rise to qualitatively different behavior in all sorts of tasks, such as reward shaping, exploration, & option discovery. Similar to what we did w/ STOMP, sometimes there's value in being aware of the reward function 😁

4 0 1 0
9 months ago

5/7: We lay some of the theoretical foundations underlying the DR, including general TD learning and dynamic programming updates, a connection between the DR and the SR, and an extension of the DR to the function approximation (FA) setting, similar to how SFs extend the SR.

4 0 1 0
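For readers less familiar with the objects in this thread: the SR that the DR is compared against can be learned with a simple TD update. Below is a minimal, self-contained sketch of the tabular SR update on a made-up 5-state random walk; the toy MDP and all names are illustrative and not from the paper (the paper's DR updates are analogous but reward-aware).

```python
import numpy as np

# Tabular successor representation (SR) learned by temporal-difference updates.
# psi[s, j] estimates the expected discounted visitation of state j from state s:
#   psi(s, .) <- psi(s, .) + alpha * (1{s} + gamma * psi(s', .) - psi(s, .))
n_states, gamma, alpha = 5, 0.9, 0.1
psi = np.zeros((n_states, n_states))

rng = np.random.default_rng(0)
s = 2
for _ in range(50_000):
    s_next = (s + rng.choice([-1, 1])) % n_states  # uniform random walk on a ring
    onehot = np.eye(n_states)[s]
    psi[s] += alpha * (onehot + gamma * psi[s_next] - psi[s])
    s = s_next

# Since the SR counts the current state too, each row should sum to ~1/(1-gamma).
print(psi.sum(axis=1))  # each entry close to 10.0
```

The DR replaces the policy-conditioned expectation underlying the SR with a reward-aware (default-policy) one, which is what makes the behaviors in posts 6/7 and 7/7 diverge from the SR's.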