@aditimavalankar.bsky.social
Research Scientist at DeepMind working on Gemini Thinking
On my way to #ICML2025 to present our algorithm that strongly scales with inference compute, in both performance and sample diversity!
Reach out if you'd like to chat more!
New side project!
assayer: A simple Python-RQ based tool to automatically monitor and evaluate ML model checkpoints offline during training.
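For a flavour of how the pattern works, here is a minimal sketch (hypothetical names throughout; this is not assayer's actual API): a watcher process enqueues each newly written checkpoint onto a Redis-backed RQ queue, and a separate RQ worker runs the evaluation, so it stays offline and never blocks training.

```python
# Sketch only: function and path names are illustrative assumptions.
import time
from pathlib import Path

from redis import Redis
from rq import Queue


def evaluate_checkpoint(path: str) -> None:
    """Placeholder eval job; a real one would load the model and compute metrics."""
    print(f"evaluating {path}")


queue = Queue("checkpoint-eval", connection=Redis())
seen: set[Path] = set()

while True:
    # Poll the checkpoint directory and hand any new file to an RQ worker.
    for ckpt in sorted(Path("checkpoints").glob("*.pt")):
        if ckpt not in seen:
            seen.add(ckpt)
            queue.enqueue(evaluate_checkpoint, str(ckpt))
    time.sleep(30)
```

Workers would then be started separately, e.g. with `rq worker checkpoint-eval`, so evaluation jobs run in their own process.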
Ever thought of joining DeepMind's RL team? We're recruiting for a research engineering role in London:
job-boards.greenhouse.io/deepmind/job...
Please spread the word!
Accepted to #ICML2025
See you in Vancouver!
When faced with a challenge (like debugging), it helps to think back to examples of how you've overcome challenges in the past. Same for LLMs!
The method we introduce in this paper is efficient because examples are chosen for their complementarity, leading to much steeper inference-time scaling!
This was a really fun collaboration with my brilliant co-authors Hassan Mansoor, Zita Marinho, Masha Samsikova, and @schaul.bsky.social!
In addition to this, AuPair has been shown to work better across CodeForces difficulty levels and preserve coverage of problem categories from the training data distribution (see paper for more details).
4) the responses produced by the more performant models have high diversity.
3) our approach exhibits strong scaling with inference-time compute, and even after 100+ LLM calls, we do not see plateauing in the scaling curve;
2) we observe strong generalisation across datasets and models, implying that the process of curating these examples can be performed once and the benefits in performance can be reaped multiple times;
Injecting different examples into the prompt has several benefits: 1) we see significant gains in performance compared to best-of-N and self-repair baselines on multiple model families: Gemini, Gemma, and GPT;
Fun fact: the title "AuPair" has multiple interpretations: at a higher level, it guides LLMs to better behaviour with a predefined set of examples; it is also a conjunction of Au, the chemical symbol for gold, and pair, i.e. golden pairs!
For the coding domain, a golden example pair, or AuPair, contains the problem description, an incorrect guess, and a fix that improves the solution.
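As a concrete illustration, a single golden pair could be represented along these lines (field and method names are my assumptions, not taken from the paper's code):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AuPair:
    problem: str  # problem description
    guess: str    # an incorrect attempted solution
    fix: str      # the improved, repaired solution

    def as_one_shot_prompt(self, new_problem: str, new_guess: str) -> str:
        """Render this pair as a 1-shot example, then append the new task to repair."""
        return (
            f"Problem:\n{self.problem}\n"
            f"Incorrect attempt:\n{self.guess}\n"
            f"Fixed solution:\n{self.fix}\n\n"
            f"Problem:\n{new_problem}\n"
            f"Incorrect attempt:\n{new_guess}\n"
            f"Fixed solution:\n"
        )
```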
Our submodular approach yields a fixed ordered set of complementary and useful AuPairs. For a budget of N LLM calls, the model is given N different prompts to answer the same question, where each prompt contains a different golden example.
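In code, the budgeted inference loop described above might look like this (a sketch reusing the AuPair class from the snippet above; `llm` is a stand-in for any text-completion callable, and scoring of candidates, e.g. by test-case pass rate, happens outside):

```python
def repair_with_aupairs(llm, aupairs, problem: str, guess: str, budget: int) -> list[str]:
    """One LLM call per golden pair: N prompts for the same question,
    each carrying a different 1-shot AuPair example."""
    candidates = []
    for pair in aupairs[:budget]:
        prompt = pair.as_one_shot_prompt(problem, guess)
        candidates.append(llm(prompt))
    return candidates
```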
The key idea underlying our approach is simple: we curate a fixed set of golden examples (AuPairs) that are provided as 1-shot in-context examples during inference. We show that using AuPairs significantly improves code repair performance and scales well with inference compute!
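One way to picture the curation step (a heavily simplified sketch: it assumes a precomputed score matrix of how much each candidate pair improves each validation problem, which is my stand-in for the paper's actual usefulness signal, and the real procedure may differ in scoring and stopping criteria):

```python
import numpy as np


def curate_aupairs(scores: np.ndarray, k: int) -> list[int]:
    """Greedy max-coverage selection: scores[i, j] is the (nonnegative)
    improvement that candidate pair i yields on validation problem j."""
    remaining = scores.copy()
    chosen: list[int] = []
    for _ in range(k):
        gains = remaining.sum(axis=1)
        if gains.max() <= 0:
            break  # no candidate adds coverage any more
        best = int(gains.argmax())  # largest marginal gain
        chosen.append(best)
        # Complementarity: discount gains already covered by the chosen pair,
        # so the next pick must help on different problems.
        remaining = np.maximum(remaining - remaining[best], 0.0)
    return chosen
```

Because the covered-gain objective is monotone submodular for nonnegative scores, a greedy loop of this form enjoys the usual (1 - 1/e) approximation guarantee.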
Excited to share our recent work, AuPair, an inference-time technique that builds on the premise of in-context learning to improve LLM coding performance!
arxiv.org/abs/2502.18487
🧵
Are there limits to what you can learn in a closed system? Do we need human feedback in training? Is scale all we need? Should we play language games? What even is "recursive self-improvement"?
Thoughts about this and more here:
arxiv.org/abs/2411.16905