@algoperf.bsky.social
AlgoPerf benchmark for faster neural network training via better training algorithms

The explainer video: www.youtube.com/watch?v=_yX1...
03.04.2025 11:15

We're all about acceleration!
Watch @priya-kasimbeg.bsky.social & @fsschneider.bsky.social speedrun an explanation of the AlgoPerf benchmark, rules, and results all within a tight 5 minutes for our #ICLR2025 paper video on "Accelerating Neural Network Training". See you in Singapore!
More details:
- ICLR 2025 results paper: openreview.net/pdf?id=CtM5x...
- Benchmark paper: arxiv.org/abs/2306.07179
- AlgoPerf codebase: github.com/mlcommons/al...
This is just the beginning! With the help of the community, we want to test even more training recipes, push the SOTA for neural network training methods even further, and improve the AlgoPerf benchmark. Follow us & join the working group (mlcommons.org/working-grou...).

And the winner in the self-tuning ruleset, based on Schedule-Free AdamW, demonstrated a new level of effectiveness for completely hyperparameter-free neural network training: roughly 10% faster training compared to a NadamW baseline with well-tuned default hyperparameters.
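
If you want to try the hyperparameter-free route yourself, below is a minimal sketch of training a PyTorch model with Schedule-Free AdamW via the open-source schedulefree package. It illustrates the method, not the winning submission; the model, batch, and learning rate are placeholders.

import torch
import schedulefree  # pip install schedulefree

# Placeholder model and data; swap in your own.
model = torch.nn.Linear(32, 10)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

# Schedule-Free keeps an interpolation of iterates, so the optimizer must be
# switched between train and eval modes explicitly.
optimizer.train()
model.train()
for step in range(1000):
    x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))  # placeholder batch
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

optimizer.eval()  # switch back before evaluation or checkpointing
model.eval()

Note that there is no learning rate schedule to configure, which is exactly the kind of tuning burden the self-tuning ruleset is designed to stress.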

Then, we asked the community to submit training algorithms. The results? The winner of the external tuning ruleset, using Distributed Shampoo, reduced training time by ~30% over our well-tuned baseline, showing that non-diagonal methods can beat Adam, even in wall-clock time!
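
To unpack "non-diagonal": Adam preconditions each parameter independently (a diagonal preconditioner), whereas Shampoo-style methods precondition whole weight matrices with Kronecker factors. The toy loop below shows that idea on a single-matrix least-squares problem; it is a didactic sketch, not the Distributed Shampoo implementation used by the winning submission (no grafting, blocking, learning rate schedule, or distributed bookkeeping).

import torch

def inv_fourth_root(mat, eps=1e-6):
    # mat^(-1/4) for a symmetric PSD matrix, via an eigendecomposition.
    eigvals, eigvecs = torch.linalg.eigh(mat + eps * torch.eye(mat.shape[0]))
    return eigvecs @ torch.diag(eigvals.clamp_min(eps).pow(-0.25)) @ eigvecs.T

torch.manual_seed(0)
W_true = torch.randn(20, 30)   # target weights of a toy linear model
X = torch.randn(30, 128)       # toy inputs
Y = W_true @ X                 # toy labels
W = torch.zeros(20, 30)        # parameters we train
L = torch.zeros(20, 20)        # left (row-space) gradient statistics
R = torch.zeros(30, 30)        # right (column-space) gradient statistics
lr = 0.05                      # arbitrary, purely for illustration

for step in range(200):
    G = (W @ X - Y) @ X.T / X.shape[1]   # gradient of the mean squared error
    L += G @ G.T                         # accumulate Kronecker factors
    R += G.T @ G
    W -= lr * inv_fourth_root(L) @ G @ inv_fourth_root(R)  # Shampoo update

print(f"parameter error after 200 Shampoo steps: {torch.norm(W - W_true):.3f}")

Real implementations amortize the expensive inverse-root computations over many steps and shard the per-layer statistics across devices, which is roughly where the "Distributed" in the name comes from.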

(3) Training algorithms must perform across 8 realistic deep learning workloads (ResNet-50, Conformer, ViT, etc.). (4) Submissions compete on the runtime to reach a given performance threshold. (5) Hyperparameter tuning is explicitly accounted for with our tuning rulesets.

Over the past years, we've built the AlgoPerf: Training Algorithms benchmark. The core ideas: (1) Only the training algorithm changes; everything else (hardware, model, data) stays fixed. (2) Submissions compete directly; no more weak baselines!
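
To make the "runtime to reach a given performance threshold" idea concrete, here is a simplified sketch of time-to-target measurement: the workload (model, data, target metric) is fixed by the benchmark, a submission only supplies the training step, and its score on that workload is the wall-clock time at which it first reaches the target. The function and names below are illustrative, not the actual AlgoPerf harness, which additionally handles tuning trials and aggregates times across all workloads into a single benchmark score.

import time

def time_to_target(submission_step, evaluate, target, budget_seconds, eval_every=100):
    # Run the submitted training step until the fixed validation target is hit
    # or the time budget runs out. Returns seconds to target, or None.
    # Assumes a higher-is-better validation metric.
    start = time.monotonic()
    step = 0
    while time.monotonic() - start < budget_seconds:
        submission_step(step)   # the only piece a submitter controls
        step += 1
        if step % eval_every == 0 and evaluate() >= target:
            return time.monotonic() - start
    return None  # target not reached within the budget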

These choices are often critical, but reliable empirical guidance is scarce. Instead, we rely on expert intuition, anecdotal evidence, and babysitting. Check out this learning rate schedule from the OPT paper, which was manually determined. There has to be a better way!

Currently, training neural nets is a complicated & fragile process with many important choices: How should I set/tune the learning rate? Using what schedule? Should I use SGD or Adam (or maybe Nadam/Amos/Shampoo/SOAP/Muon/... the list is virtually endless)?

Hi there! This account will post about the AlgoPerf benchmark and leaderboard updates for faster neural network training via better training algorithms. But let's start with what AlgoPerf is, what we have done so far, and how you can train neural nets ~30% faster.
14.03.2025 20:56