@jaydenteoh.bsky.social
Undergraduate researcher. Interested in generalization, multi-objective reinforcement learning, and open-endedness | Looking for a PhD in RL in 2026. My works: https://scholar.google.com/citations?user=GnHpLE8AAAAJ&hl=en

Cool setup
30.10.2025 13:13

Been feeling FOMO from all the ICLR posts the past 2 days. Will finally be at the conference tomorrow. Please do come by our poster; I'm happy to chat!
⏰: Sat, 26 Apr 3pm
📍: Hall 3 + Hall 2B, Poster 374
Also, I'll be presenting this work at ICLR next month. Please do come by!
05.03.2025 06:59
Our benchmark code is already available for testing out new algorithms, and I will be sharing additional instructions on using our code in the coming days. Stay tuned. I look forward to engaging and collaborating with anyone interested in advancing this new area of research!
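To give a flavor of the interaction loop while the detailed instructions are pending, here's a minimal sketch. It assumes a MO-Gymnasium-style API in which step() returns a vector reward; the environment ID is a placeholder, not a confirmed ID from our benchmark.

```python
# A minimal multi-objective environment loop -- a sketch assuming a
# MO-Gymnasium-style API; "mo-hopper-v4" is a placeholder ID, not
# necessarily one of our benchmark's registered environments.
import mo_gymnasium as mo_gym

env = mo_gym.make("mo-hopper-v4")
obs, info = env.reset(seed=0)

for _ in range(1000):
    action = env.action_space.sample()  # stand-in for a trained policy
    obs, vec_reward, terminated, truncated, info = env.step(action)
    # vec_reward holds one entry per objective (e.g. speed, energy cost),
    # rather than a single pre-scalarized number.
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```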
05.03.2025 06:59
There are numerous promising avenues for further exploration, particularly in adapting techniques and insights from single-objective RL generalization research to tackle this harder problem setting!
05.03.2025 06:59
Ultimately, a priori scalarization of rewards in single-objective RL limits the agent's flexibility to adapt its behavior to environment changes and objective trade-offs. Developing agents capable of generalizing across multiple environments AND objectives will become a crucial research direction.
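To make the limitation concrete, here is a toy Python example (all numbers invented for illustration): a policy trained against one fixed weight vector is locked into a single trade-off, whereas a weight-conditioned MORL policy can pick the right behavior at deployment.

```python
import numpy as np

# Toy illustration (all numbers made up). Vector returns of two candidate
# behaviors over the objectives [progress, comfort]:
fast_but_rough = np.array([0.9, 0.2])
slow_but_smooth = np.array([0.4, 0.8])

# A priori scalarization fixes one weight vector before training,
# baking a single trade-off into the learned policy:
w_train = np.array([0.8, 0.2])
print(fast_but_rough @ w_train, slow_but_smooth @ w_train)    # 0.76 vs 0.48 -> learns "fast"

# If preferences or conditions shift at deployment, the scalarized
# agent cannot adapt, even though the other behavior now scores higher:
w_deploy = np.array([0.2, 0.8])
print(fast_but_rough @ w_deploy, slow_but_smooth @ w_deploy)  # 0.34 vs 0.72 -> "smooth" wins

# A MORL policy conditioned on the weights, pi(a | s, w), can instead
# switch to the appropriate behavior for whichever w it faces at test time.
```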
05.03.2025 06:59
Our baseline evaluations of current MORL algorithms uncover two key insights:
1) Current MORL algorithms struggle with generalization.
2) However, MORL demonstrates greater potential for learning adaptable behaviors for generalization than single-objective RL.
We also introduce a benchmark featuring diverse multi-objective domains with parameterized environment configurations to facilitate studies in this area.
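As a rough sketch of the kind of study this enables (the fronts below are invented numbers, and this is a generic two-objective hypervolume routine rather than our benchmark's own evaluation code): compute a Pareto front quality indicator for the same agent under different environment configurations, and read a drop as a generalization gap.

```python
import numpy as np

def hypervolume_2d(points: np.ndarray, ref: np.ndarray) -> float:
    """Hypervolume of a two-objective maximization front w.r.t. a
    reference point, via a simple sweep over the sorted points."""
    pts = points[(points > ref).all(axis=1)]   # keep points dominating ref
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(-pts[:, 0])]          # sort by objective 1, descending
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:                         # dominated points add no area
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Invented fronts attained by one agent in two configurations of the
# same domain (e.g. a default vs. a perturbed physics setting):
front_easy = np.array([[0.9, 0.2], [0.7, 0.6], [0.3, 0.9]])
front_hard = np.array([[0.5, 0.1], [0.4, 0.3], [0.1, 0.5]])
ref = np.array([0.0, 0.0])

for name, front in [("easy", front_easy), ("hard", front_hard)]:
    print(name, hypervolume_2d(front, ref))    # 0.55 vs 0.15: the drop is
                                               # the generalization gap
```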
05.03.2025 06:59
Despite its importance, the intersection of generalization and multi-objectivity remains a significant gap in the RL literature.
In this paper, we formalize generalization in Multi-Objective Reinforcement Learning (MORL) and show how it can be evaluated.
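Without reproducing the paper's definitions, one natural way to write such an evaluation target down (a sketch, not necessarily the paper's exact formalization) is as an expected Pareto front quality over a distribution of environments:

$$
J(\pi) = \mathbb{E}_{E \sim p(\mathcal{E})}\left[\mathcal{I}\big(F_{\pi}(E)\big)\right]
$$

Here $p(\mathcal{E})$ is a distribution over multi-objective environments, $F_{\pi}(E)$ is the set of expected return vectors the agent attains in environment $E$, and $\mathcal{I}$ is an indicator such as hypervolume; generalization then means $J(\pi)$ stays high on environments held out from training.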
Consider an autonomous vehicle, which must not only generalize across varied environmental conditions (different weather patterns, lighting, and road surfaces) but also learn optimal trade-offs between competing objectives such as travel time, passenger comfort, and safety.
05.03.2025 06:59
Real-world sequential decision-making tasks often involve balancing trade-offs among conflicting objectives and generalizing across diverse environments.
05.03.2025 06:59
Our work "On Generalization Across Environments in Multi-Objective Reinforcement Learning" has been accepted at ICLR 2025!
Paper: arxiv.org/abs/2503.00799
Code: github.com/JaydenTeoh/M...
Authors: Jayden Teoh, Pradeep Varakantham, Peter Vamplew (@amp1874.bsky.social)
More below --->
Our ICLR paper on generalization in multi-objective reinforcement learning is now on arXiv: arxiv.org/abs/2503.00799
This work, led by Jayden Teoh, is the first to examine generalization in RL across multiple multi-objective environments, and it is a great basis for an exciting new field of research.