Foundation models on the AutoML podcast 2/3: are LLMs killing AutoML? It's probably not that simple. Listen for more details.
31.10.2025 11:31

@theeimer.bsky.social
RL researcher looking for DACs // What is this AutoRL anyway? she/her
Currently: Leibniz Uni Hannover
Previously: Uni Freiburg (Master's) | Meta AI London (Intern)
Always & Forever: AutoRL.org
Stealing all of the recommendations!
This made me think of The Left Hand Of Darkness, though I guess that's actually almost the opposite, communication bridging a seemingly impossible gap in understanding each other...
I fell into a hole, but made it out again with new episodes! This is part one of three of an accidental series on foundation models. The next parts will be released in October and November, so stay tuned!
22.09.2025 11:32

Great opportunity to work with great people. Go apply!
28.08.2025 12:06

New blog post: AI Allergy.
On my increasing disgust with the AI discourse, even though I still like the technical and philosophical. And how I wish I could be excited about AI again.
togelius.blogspot.com/2025/08/ai-a...
It is time
11.07.2025 01:04

The "reproducibility crisis" in science constantly makes headlines. Reproduction efforts are often limited. What if you could assess the reproducibility of an entire field?
That's what @brunolemaitre.bsky.social et al. have done. Fly immunity is highly replicable & offers lessons for #metascience
A 🧵 1/n
Need for Speed or: How I Learned to Stop Worrying About Sample Efficiency
Part II of my blog series "Getting SAC to Work on a Massive Parallel Simulator" is out!
I've included everything I tried that didn't work (and why JAX PPO was different from PyTorch PPO)
araffin.github.io/post/tune-sa...
1/2 Offline RL has always bothered me. It promises that by exploiting offline data, an agent can learn to behave near-optimally once deployed. In real life, it breaks this promise: it requires large amounts of online samples for tuning and offers no guarantees of behaving safely to achieve the desired goals.
30.05.2025 08:39

Crazy volume! On the other hand, it's not that surprising. We also got one of these, and only because it was such a good deal: even if our complete lack of experience makes research on it hard, we can use it for teaching alone and still be okay with spending the money. I doubt we're the only ones!
27.05.2025 13:25

Only 3 Weeks to Go!
The AutoML summer school (June 10th-13th) is just around the corner, and there is not much time left to register!
---> www.automlschool.org <---
We added several new speakers to the program.
Going to the hospital because I broke my wrist smashing the endorse button:
www.understandingai.org/p/i-got-fool...
"We can only presume to build machines like us once we see ourselves as machines first."
Abeba Birhane (2022, p. 13)
This is the core. So true.
Panel discussion on the current economic precarity of autonomous vehicle businesses. www.youtube.com/watch?v=gDG-...
"We are at a really tough spot in generating flows of cash right now." π
After a short era in which people questioned the value of academia in ML, its value is more obvious than ever. Big labs stopped publishing the minute commercial incentives showed up and are relentlessly focused on a singular vision of scaling. Academia is a meaningful complement, bringing...
1/2
It's strange to me that the focus of many people's worry is still "superintelligence" and not the reality we're currently living in, where increasingly authoritarian governments wield technology oppressively.
This fantastical distraction based on speculative rhetoric is increasingly harmful.
A sensible perspective on humanoids in manufacturing (TL;DR: if you can make humanoids, you can probably make better, more manufacturing-specific things)
blog.spec.tech/p/humanoid-r...
Mark your calendars, EWRL is coming to Tübingen!
When? September 17-19, 2025.
More news to come soon, stay tuned!
Llama 4 was a messy release: unreleased finetunes boosting scores, rumors of training on test, released on a weekend, etc.
As (open) models are commoditized / competition grows, what is the role of Meta's Llama efforts in the future? Should they continue?
At least there is no need to jailbreak the model anymore (is there a counterpart to make it nicer?)
07.04.2025 10:55

The school kids visiting me during this year's Future Day really had hard-hitting questions: "Do you still have a lot of free time?"
Me, a pretty fresh and currently slightly overwhelmed PostDoc: "It's important to be good at time management. Like my colleague, maybe you should ask her."
So far, 2,135 people have responded to the poll Søren and I posted a few days ago. Of those, 94.4% replied "Yes" to being interested in officially presenting accepted @neuripsconf.bsky.social papers in Europe. (1/7)
03.04.2025 11:03

German media, I beg you: just please go one day without being obsessed with migration. Just one day. I promise it won't kill you. You have lakes and mountains and good football and good healthcare and asparagus. You'll be fine.
01.04.2025 08:21

True, I've been "socialized" in the AutoML community; how to compare algorithms is a big deal there.
I remember discussing with my advisor whether it's worth evaluating issues with improper HPO setup in RL; he thought it was so obvious that everyone must already be doing it (spoiler: not really).
Well, then there's only one alternative: "We define OurPO as PPO with lr=0.01, ent_coef=0.1, ... and compare it to OurQN, which is DQN with lr=..."

31.03.2025 20:56
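In case the joke needs unpacking, here is a minimal sketch of what "OurPO" and "OurQN" would amount to in practice: existing algorithms with specific hyperparameters baked into a new name. Stable-Baselines3, Gymnasium, the CartPole environment, and the DQN learning rate are my assumptions for illustration; only PPO, DQN, lr=0.01, and ent_coef=0.1 come from the post.

```python
# Hypothetical sketch only: "new algorithms" that are just existing ones with
# hyperparameters hard-coded into the name. Assumes stable-baselines3 and gymnasium.
import gymnasium as gym
from stable_baselines3 import DQN, PPO

# "OurPO": plain PPO with lr=0.01 and ent_coef=0.1, as in the joke above
our_po = PPO("MlpPolicy", gym.make("CartPole-v1"), learning_rate=0.01, ent_coef=0.1)

# "OurQN": plain DQN with an equally arbitrary learning rate (the post elides the value)
our_qn = DQN("MlpPolicy", gym.make("CartPole-v1"), learning_rate=1e-3)

our_po.learn(total_timesteps=10_000)
our_qn.learn(total_timesteps=10_000)
```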
Tell them their argument might be valid with different hyperparameters.
31.03.2025 03:19

This obviously also depends on the budget and HPO method, and it combines performance and tunability into one score, but I think that's quite reasonable in practice. Not very satisfying for an empirical nihilist, though, I imagine.
31.03.2025 12:52

Well, what validity are you looking for? The absolute "algorithm A is better than B on benchmark C" is hard wrt hyperparameters, but "algorithm A is better than B on C given I can realistically try out 50 configurations" is what we often want in empirical ML anyway, no?

31.03.2025 12:52
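To make that framing concrete, a minimal sketch of the budget-constrained comparison: tune both algorithms with the same budget of 50 randomly sampled configurations and compare the best result each reaches. The search space, the evaluate functions, and the scores are made-up placeholders, not a real benchmark or anyone's actual protocol.

```python
# Hypothetical sketch of "A is better than B on C given I can realistically
# try out 50 configurations". Everything below is a toy placeholder.
import random

def best_under_budget(evaluate, search_space, budget=50, seed=0):
    """Random-search `budget` configurations and return the best score found."""
    rng = random.Random(seed)
    best = float("-inf")
    for _ in range(budget):
        config = {name: rng.choice(values) for name, values in search_space.items()}
        best = max(best, evaluate(config))
    return best

# Toy usage with fake objectives: "A" prefers small learning rates, "B" large ones.
space = {"lr": [1e-4, 3e-4, 1e-3, 1e-2], "ent_coef": [0.0, 0.01, 0.1]}
evaluate_a = lambda cfg: 1.0 - cfg["lr"]       # placeholder score on "benchmark C"
evaluate_b = lambda cfg: 0.5 + 10 * cfg["lr"]  # placeholder score on "benchmark C"

score_a = best_under_budget(evaluate_a, space, budget=50)
score_b = best_under_budget(evaluate_b, space, budget=50)
print("A beats B on C under this budget" if score_a > score_b else "B beats A on C under this budget")
```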
So true, Gilles.
Yes, it is a pretext task, but often, when we try real tasks, we find that the problems are not those we expected.
We need more people looking at relevant problems.
Kiri Wagstaff said this 15 years ago
arxiv.org/abs/1206.4656
My PhD supervisor discussed my first two or three reviews with me (including checking over the wording etc.) and does that for all his PhD students, but I know that's not the standard in most other groups I'm familiar with...
26.03.2025 10:37