We just released AlgoPerf v0.6!
- Rolling leaderboard
- Lower compute costs
- JAX jit migration
- Bug fixes & flexible API
Coming soon: More contemporary baselines + an LM workload…
github.com/mlcommons/al...
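As a minimal, hypothetical sketch of what a jax.jit migration involves (not AlgoPerf's actual API, and the function names here are illustrative): a pure training-step function is wrapped in jax.jit so XLA compiles it once and reuses the compiled program on later calls.

```python
import jax
import jax.numpy as jnp

@jax.jit
def sgd_step(params, grads, lr=0.1):
    # Pure function: returns new params instead of mutating in place,
    # which is what makes it safe to trace and compile with jax.jit.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.ones(3), "b": jnp.zeros(3)}
grads = {"w": jnp.full(3, 0.5), "b": jnp.ones(3)}
new_params = sgd_step(params, grads)  # compiled on first call, cached after
```

The first call pays a one-time compilation cost; subsequent calls with the same shapes and dtypes dispatch straight to the compiled executable.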
My team is hiring a research scientist or engineer to work on methodological deep learning research! We study how to improve the deep learning "workflow" (developers.google.com/machine-lear...), with a special emphasis on training algorithms and recipes: job-boards.greenhouse.io/deepmind/job...
18.07.2025 20:45
The explainer video: www.youtube.com/watch?v=_yX1...
03.04.2025 11:15
We're all about acceleration!
Watch @priya-kasimbeg.bsky.social & @fsschneider.bsky.social speedrun an explanation of the AlgoPerf benchmark, rules, and results all within a tight 5 minutes for our #ICLR2025 paper video on "Accelerating Neural Network Training". See you in Singapore!
Hi there! This account will post about the AlgoPerf benchmark and leaderboard updates for faster neural network training via better training algorithms. But let's start with what AlgoPerf is, what we have done so far, and how you can train neural nets ~30% faster.
14.03.2025 20:56
Making LLMs run efficiently can feel scary, but scaling isn't magic, it's math! We wanted to demystify the "systems view" of LLMs and wrote a little textbook called "How To Scale Your Model", which we're releasing today. 1/n
04.02.2025 18:54