@marcelhussing.bsky.social
PhD student at the University of Pennsylvania. Currently an intern at MSR. Interested in reliable and replicable reinforcement learning and using it for knowledge discovery: https://marcelhussing.github.io/ All posts are my own.

New ChatGPT data just dropped
07.08.2025 23:29 · 38 likes · 7 reposts · 0 replies · 1 quote

My PhD journey started with me fine-tuning hparams of PPO, which ultimately led to my research on stability. With REPPO, we've made a huge step in the right direction: stable learning, no tuning on a new benchmark, amazing performance. REPPO has the potential to be the PPO killer we have all been waiting for.
17.07.2025 19:41 · 7 likes · 2 reposts · 0 replies · 0 quotes

[GIF alt text] GIF showing two plots that symbolize the REPPO algorithm. On the left side, four curves track the return of an optimization function, and on the right side, the optimization paths over the objective function are visualized. The GIF shows that Monte-Carlo gradient estimators have high variance and fail to converge, while surrogate-function estimators converge smoothly but might find suboptimal solutions if the surrogate function is imprecise.
Presenting Relative Entropy Pathwise Policy Optimization #REPPO
Off-policy #RL (e.g. #TD3) trains by differentiating a critic, while on-policy #RL (e.g. #PPO) uses Monte-Carlo gradients. But is that necessary? Turns out: no! We show how to get critic gradients on-policy. arxiv.org/abs/2507.11019
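The critic-gradient vs. Monte-Carlo distinction in the post can be made concrete with a toy example. The sketch below is my own illustration, not code from the REPPO paper: a 1-D Gaussian "policy" and a made-up differentiable "critic" `Q`, comparing a score-function (Monte-Carlo, PPO-style) estimate of the policy gradient against a pathwise (critic-differentiating, TD3-style) estimate of the same quantity. All names and constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def Q(a):                       # toy differentiable "critic"
    return -(a - 2.0) ** 2

def dQ_da(a):                   # its analytic gradient
    return -2.0 * (a - 2.0)

mu, sigma, n = 0.0, 1.0, 10_000
eps = rng.standard_normal(n)
a = mu + sigma * eps            # reparameterized samples, a ~ N(mu, sigma^2)

# Score-function (Monte-Carlo) estimate of d E[Q(a)] / d mu:
#   E[ Q(a) * d log p(a)/d mu ],  with d log p/d mu = (a - mu) / sigma^2
score_terms = Q(a) * (a - mu) / sigma**2
score_grad = score_terms.mean()

# Pathwise estimate of the same gradient, differentiating through the critic:
#   E[ dQ/da * da/d mu ],  and da/d mu = 1 under a = mu + sigma * eps
pathwise_terms = dQ_da(a)
pathwise_grad = pathwise_terms.mean()

# True gradient: d/d mu of -((mu - 2)^2 + sigma^2) = -2 * (mu - 2) = 4 at mu = 0
print(score_grad, score_terms.std())        # noisy estimate, large per-sample spread
print(pathwise_grad, pathwise_terms.std())  # same target, far smaller spread
```

Both estimators target the same gradient (4 here), but the pathwise per-sample spread is several times smaller in this toy, which is the variance gap the GIF above visualizes.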
Works that use #VAML / #MuZero losses often use deterministic models. But if we want stochastic models, to measure uncertainty or to leverage current SOTA model classes such as #transformers and #diffusion, we need to take care: naively translating the loss functions leads to mistakes!
19.06.2025 15:20 · 7 likes · 4 reposts · 1 reply · 0 quotes

Dhruv Rohatgi will be giving a lecture on our recent work on comp-stat tradeoffs in next-token prediction at the RL Theory virtual seminar series (rl-theory.bsky.social) tomorrow at 2pm EST! Should be a fun talk; come check it out!!
26.05.2025 19:19 · 11 likes · 5 reposts · 1 reply · 0 quotes

Just arrived in Montreal for my internship at FAIR. So far Montreal has been amazing: great walkable areas, good food, and nice people! Although I must say I have to get used to being addressed in French.
26.05.2025 16:23 · 6 likes · 0 reposts · 0 replies · 0 quotes

We'll be presenting our work on Oracle-Efficient Reinforcement Learning for Max Value Ensembles at the RL theory seminar! Been following this series for a while; super excited we get to present some of our work.
25.04.2025 14:22 · 7 likes · 1 repost · 0 replies · 0 quotes

Many great papers from Mila!
Two by my team at the Adaptive Agents Lab (Adage) together with collaborators:
A Truncated Newton Method for Optimal Transport
openreview.net/forum?id=gWr...
MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL
openreview.net/forum?id=6Rt...
#ICLR2025
Deadline Extension Alert!
Good news! We're extending the #CoLLAs2025 submission deadlines:
Abstracts: Feb 26, 2025, 23:59 AoE
Papers: Mar 3, 2025, 23:59 AoE
More time to refine your work. Don't miss this chance to contribute to #lifelong-learning research!
lifelong-ml.cc
I was very hyped about this place initially, now I come here, see 5 posts about politics, unfollow 5 people and close the website. Where are the interesting AI posts?
22.02.2025 23:05 · 0 likes · 0 reposts · 1 reply · 0 quotes

Can you solve group-conditional online conformal prediction with a no-regret learning algorithm? Not with vanilla regret, but, yes, with swap regret. And algorithms from the follow-the-regularized-leader family (notably online gradient descent) work really well for other reasons.
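For readers new to the setup: here is a minimal single-group sketch of online conformal prediction driven by an online-gradient-descent-style update on the threshold. This is my own illustration of the marginal case, not the group-conditional algorithm the post refers to; the score distribution, step size, and horizon are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.1      # target miscoverage rate -> aim for 90% coverage
lr = 0.05        # step size for the online gradient update
q = 0.0          # conformal threshold on the scores, adapted online

covered = []
for t in range(5000):
    score = abs(rng.standard_normal())   # toy conformity score, e.g. |y - yhat|
    miss = float(score > q)              # 1 if the interval failed to cover
    covered.append(1.0 - miss)
    # Gradient step on the pinball (quantile) loss at level 1 - alpha:
    # grow q after a miss, shrink it slightly after a cover, so the
    # long-run miss rate is driven toward alpha.
    q += lr * (miss - alpha)

print(np.mean(covered[1000:]), q)   # empirical coverage near 0.9 after burn-in
```

This only guarantees marginal coverage; the post's point is that validity conditional on each group is a strictly harder target, which is where swap regret comes in.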
18.02.2025 13:19 · 21 likes · 2 reposts · 1 reply · 0 quotes

Bummed out about recent politics & news drowning out the AI and science you want to see on Bluesky?
Well, here is a small "sky thread" (written on a plane) about something I recently discovered: e-values!
They are an alternative to standard p-values as a measure of statistical significance. 1/N
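To make the thread's subject concrete, here is a tiny sketch of the core e-value idea: a likelihood ratio has expectation 1 under the null, a product of such ratios is again an e-value, and Markov's inequality turns a large e-value into a significance statement. This is my own illustration with made-up numbers, not content from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Null H0: the coin is fair (p0 = 0.5). Alternative: p1 = 0.7.
# The per-flip likelihood ratio is an e-value (expectation 1 under H0),
# and the running product over flips is again an e-value.
p0, p1 = 0.5, 0.7
flips = rng.random(500) < 0.7    # data actually generated with p = 0.7

e = 1.0
for heads in flips:
    e *= (p1 / p0) if heads else ((1 - p1) / (1 - p0))

# Markov: P(E >= 1/alpha) <= alpha under H0, so e >= 20 rejects at alpha = 0.05.
# Unlike p-values, you may keep multiplying in new flips and stop at any time.
print(e)
```

The optional-stopping property in the last comment is the practical draw: the running product stays a valid e-value no matter when you peek.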
Throwing compute at things has proven quite powerful in other domains but, until recently, not as much in #ReinforcementLearning.
Excited to share that our MAD-TD paper got a spotlight at #ICLR25! Check out Claas' thread on how to get the most out of your compute/data buck when training from scratch.
I agree with the notion, but I don't think "things being outdated" is always bad. I'm of the opinion that we should still teach SVMs/kernels, as they teach us a different way to think about ML. PCA is still a core tool for teaching low-dim embeddings to students. We need as many tools as possible.
09.02.2025 18:23 · 0 likes · 0 reposts · 0 replies · 0 quotes

Are there no spotlights this year? Do we know?
09.02.2025 15:59 · 0 likes · 0 reposts · 0 replies · 0 quotes

EC 2025 (S)PC: let's get ready for the Super Bowl! Every time there is a first down, bid on a paper. Field goal? Bid on two. Touchdown? Bid on 5 papers (10 if it's the Eagles!). At the halftime show, enter your topic preferences and conflicts. Let's go birds!
08.02.2025 19:33 · 17 likes · 1 repost · 3 replies · 3 quotes

It's called exploratory preference optimization arxiv.org/abs/2405.21046 by @djfoster.bsky.social and others :)
08.02.2025 18:58 · 4 likes · 0 reposts · 1 reply · 0 quotes

RLC deadline has been extended by a week! Abstract deadline is Feb. 21 with a paper deadline of Feb. 28. Please spread the word!
08.02.2025 18:05 · 25 likes · 13 reposts · 1 reply · 2 quotes

This is huge, I might be able to make it now. Woohoo!
08.02.2025 18:56 · 1 like · 0 reposts · 0 replies · 0 quotes

What a future work section should be:
Oh, and here is this interesting and hard open problem that someone should solve.
Future work sections in empirical ML papers:
We leave hyperparameter optimization for future work.
My new year's resolution is to spend more time thinking. Last year, I found myself deep in the nitty-gritty of creating solutions. While that is important, it is also necessary to reflect and look at the bigger picture. Entering my 5th year, I will try to focus more on defining the next problems.
01.01.2025 11:51 · 6 likes · 0 reposts · 0 replies · 0 quotes

Apparently I'm in the top 1% of wandb users. Good or bad sign?
27.12.2024 03:06 · 1 like · 0 reposts · 1 reply · 0 quotes

Wish I could recommend the Kingkiller Chronicle, but we may never see an ending. If you are fine with that, the first two have been my favorite books for years, and I still go back regularly.
26.12.2024 07:06 · 2 likes · 0 reposts · 0 replies · 0 quotes

One key is to follow people and simply engage. I spent years on Twitter and ended up with 300 followers. Here, I felt much more appreciated. I want to post things because people may read them.
25.12.2024 23:33 · 9 likes · 0 reposts · 1 reply · 0 quotes

Reposting a postdoc opportunity in field robotics at Penn, in case anyone missed it the first time around.
23.12.2024 05:15 · 16 likes · 4 reposts · 0 replies · 0 quotes

With great power comes great responsibility
22.12.2024 03:42 · 1 like · 0 reposts · 0 replies · 0 quotes

Done!
21.12.2024 20:22 · 2 likes · 0 reposts · 1 reply · 0 quotes

Microsoft's Computational Social Science group may have the opportunity to hire one researcher:
Senior: 0-3 yrs post PhD
jobs.careers.microsoft.com/global/en/jo...
Principal: 3+ yrs post PhD
jobs.careers.microsoft.com/global/en/sh...
Please note: our ability to hire this season is not certain
Anyone who shills Hydra gets a retweet. Not using it borders on malpractice.
19.12.2024 00:35 · 10 likes · 1 repost · 1 reply · 0 quotes