@rl-conference.bsky.social
27.08.2025 12:48
@schaul.bsky.social
RL researcher at DeepMind https://schaul.site44.com/
@rl-conference.bsky.social
27.08.2025 12:48
Where do some of Reinforcement Learning's great thinkers stand today?
Find out! Keynotes of the RL Conference are online:
www.youtube.com/playlist?lis...
Wanting vs liking, Agent factories, Theoretical limit of LLMs, Pluralist value, RL teachers, Knowledge flywheels
(guess who talked about which!)
On my way to #ICML2025 to present our algorithm that strongly scales with inference compute, in both performance and sample diversity!
Reach out if youβd like to chat more!
Deadline to apply is this Wednesday!
02.06.2025 09:40
The RL team is a small team led by David Silver. We build RL algorithms and solve ambitious research challenges. As one of DeepMind's oldest teams, it has been instrumental in building DQN, AlphaGo, Rainbow, AlphaZero, MuZero, AlphaStar, AlphaProof, Gemini, etc. Help us build the next big thing!
24.05.2025 10:08
Ever thought of joining DeepMind's RL team? We're recruiting for a research engineering role in London:
job-boards.greenhouse.io/deepmind/job...
Please spread the word!
When faced with a challenge (like debugging) it helps to think back to examples of how you've overcome challenges in the past. Same for LLMs!
The method we introduce in this paper is efficient because examples are chosen for their complementarity, leading to much steeper inference-time scaling!
RLC Keynote speakers: Leslie Kaelbling, Peter Dayan, Rich Sutton, Dale Schuurmans, Joelle Pineau, Michael Littman
Some extra motivation for those of you in RLC deadline mode: our line-up of keynote speakers -- as all accepted papers get a talk, they may attend yours!
@rl-conference.bsky.social
200 great visualisations: 200 facets and nuances of 1 planetary story.
31.01.2025 13:41
The sound of two users joining per second: "tik", "tok"...
30.01.2025 11:39
Reposting David Silver's talk about how RL is the way to intelligence. No particular reason
www.youtube.com/watch?v=pkpJ...
Announcement of Richard S. Sutton as RLC 2025 keynote speaker
Excited to announce the first RLC 2025 keynote speaker, a researcher who needs little introduction, whose textbook we've all read, and who keeps pushing the frontier on RL with human-level sample efficiency
08.01.2025 15:03
Could language games (and playing many of them) be the renewable energy that Ilya was hinting at yesterday? They do address two core challenges of self-improvement -- let's discuss!
My talk is today at 11:40am, West Meeting Room 220-222, #NeurIPS2024
language-gamification.github.io/schedule/
Don't get to talk enough about RL during #neurips2024? Then join us for more, tomorrow night at The Pearl!
10.12.2024 22:42
Dynamic programming has a fun origin story. In 1950, Bellman wanted to coin a term that "was something not even a Congressman could object to".
See here:
pubsonline.informs.org/doi/pdf/10.1...
This year's (first-ever) RL conference was a breath of fresh air! And now that it's established, the next edition is likely to be even better: Consider sending your best and most original RL work there, and then join us in Edmonton next summer!
02.12.2024 19:37
Ohh... good morning to you too!
Clearly this got off on the wrong foot: do you want to try again, maybe more constructively (in the spirit of bluesky not being the other place)? This is a preprint, so I'd be happy to hear your suggestions for making it less "ignorant"...
Either one or many players. For "improvement" to be well-defined, one agent must be special (see footnote 6), but the multi-agent setting has many benefits.
30.11.2024 16:54
1: open-ended means that it will keep producing novel and learnable artifacts (see the definition here: arxiv.org/abs/2406.04268), on the timescale of interest for the observer.
2: I think it is valid as a thought experiment, as it could work in principle -- but of course it hasn't been built yet.
In section 5 (second paragraph), there are about a dozen references to language games people are already using (one per paper), some with ingenious ways to provide feedback.
Also, I suspect the workshop will ultimately have the poster abstracts online with plenty of additional material!
I'll also be giving a talk about this at the @neuripsconf.bsky.social workshop on "Language Gamification" in two weeks. Pop by if you're around!
language-gamification.github.io
Are there limits to what you can learn in a closed system? Do we need human feedback in training? Is scale all we need? Should we play language games? What even is "recursive self-improvement"?
Thoughts about this and more here:
arxiv.org/abs/2411.16905
@colah.bsky.social: with a few years' hindsight, how do you see the Distill space now? Is there a chance for a reboot or a rebirth in another form?
28.11.2024 11:23
I think the Distill journal was really valuable in this space, but unfortunately it is no longer around to help...
distill.pub
If you're happy with a book-length answer (to the broader question on which technologies empower whom, why, and when), Acemoglu and Johnson have some excellent analysis:
shapingwork.mit.edu/power-and-pr...
Oh, this is my tribe!
Some other people here that I appreciate for their infectious positivity:
@akoopa.bsky.social
@jhamrick.bsky.social
@rockt.ai
@pcastr.bsky.social
@luisazintgraf.bsky.social
@dabelcs.bsky.social
@aditimavalankar.bsky.social
RLC will be held at the Univ. of Alberta, Edmonton, in 2025. I'm happy to say that the conference website is now live: rl-conference.cc/index.html
Looking forward to seeing you all there!
@rl-conference.bsky.social
#reinforcementlearning
Ok, we'll have to make sure the closed system generates an open-ended set of ideas then!
20.11.2024 10:10
Now if only that pack could keep growing in, say, an open-ended way...
20.11.2024 09:45
@togelius.bsky.social often has out-of-distribution takes -- but be warned, some of his thoughts come at book length: mitpress.mit.edu/978026254934...
17.11.2024 19:37