
Luke Marris

@lukemarris.bsky.social

Research Engineer at Google DeepMind. Interests in game theory, reinforcement learning, and deep learning. Website: https://www.lukemarris.info/ Google Scholar: https://scholar.google.com/citations?user=dvTeSX4AAAAJ

672 Followers  |  170 Following  |  23 Posts  |  Joined: 25.11.2024

Latest posts by lukemarris.bsky.social on Bluesky

Re-evaluating Open-Ended Evaluation of Large Language Models
A case study using the livebench.ai leaderboard.

[🧵9/N] And an interactive demo is available here: siqi.fr/public/re-ev...

22.04.2025 15:48 | 👍 2    🔁 1    💬 0    📌 0
Re-evaluating Open-Ended Evaluation of Large Language Models
A case study using the livebench.ai leaderboard.

Frontier models are often compared on crowdsourced user prompts, but such prompts can be low-quality, biased, and redundant, making "performance on average" hard to trust.

Come find us at #ICLR2025 to discuss game-theoretic evaluation (shorturl.at/0QtBj)! See you in Singapore!

18.04.2025 16:34 | 👍 7    🔁 2    💬 1    📌 1

😅😂 Called out!

17.04.2025 17:37 | 👍 3    🔁 0    💬 0    📌 0

[🧵8/N] Come see our poster on 2025/04/24 (poster location: Hall 3 + Hall 2B, #440).
iclr.cc/virtual/2025... #IRL

17.04.2025 16:12 | 👍 2    🔁 0    💬 1    📌 0

[🧵7/N] Big thanks to the team @GoogleDeepMind! Siqi Liu (@liusiqi.bsky.social), Ian Gemp (@drimgemp.bsky.social), Luke Marris, Georgios Piliouras, Nicolas Heess, Marc Lanctot (@sharky6000.bsky.social)

17.04.2025 16:12 | 👍 2    🔁 0    💬 1    📌 1

[🧵6/N] In summary: current open-ended LLM evals risk being brittle. Our game-theoretic framework w/ affinity entropy provides more robust, intuitive, and interpretable rankings, crucial for guiding real progress! 🧠 Check it out & let us know your thoughts! 🙏
arxiv.org/abs/2502.20170

17.04.2025 16:12 | 👍 3    🔁 0    💬 1    📌 0
Post image

[🧵5/N] Does it work? YES! ✅ On real data (arena-hard-v0.1), our method provides intuitive rankings robust to redundancy. We added 500 adversarial prompts targeting the top model – Elo rankings tanked, ours stayed stable! (See Fig 3 👇). Scales & gives interpretable insights!

17.04.2025 16:12 | 👍 3    🔁 0    💬 1    📌 0

[🧵4/N] But game theory isn't magic - standard methods often yield multiple equilibria & aren't robust to redundancy. Key innovation: we introduce novel solution concepts + 'Affinity Entropy' to find unique, CLONE-INVARIANT equilibria! ✨ (No more rank shifts just because you added copies!)

17.04.2025 16:12 | 👍 3    🔁 0    💬 1    📌 0

[🧵3/N] So, what's our fix? GAME THEORY! 🎲 We reframe LLM evaluation as a 3-player game: a 'King' model 👑 vs. a 'Rebel' model 😈, with a 'Prompt' player selecting tasks that best differentiate them. This shifts focus from 'average' performance to strategic interaction. #GameTheory #Evaluation

17.04.2025 16:12 | 👍 2    🔁 0    💬 1    📌 0
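
To make the 3-player framing concrete, here is a minimal toy sketch (my own illustration, not the paper's construction: the score matrix and the payoff definitions below are hypothetical) of how payoff tensors for the King, Rebel and Prompt players could be built from per-prompt model scores:

```python
import numpy as np

# Hypothetical scores[p, m]: quality of model m's answer to prompt p (made-up numbers).
scores = np.array([
    [0.9, 0.4, 0.5],   # prompt 0
    [0.2, 0.8, 0.6],   # prompt 1
    [0.7, 0.7, 0.1],   # prompt 2
])
n_prompts, n_models = scores.shape

# Payoff tensors indexed by (king_model, rebel_model, prompt).
king_payoff = np.zeros((n_models, n_models, n_prompts))
rebel_payoff = np.zeros_like(king_payoff)
prompt_payoff = np.zeros_like(king_payoff)

for k in range(n_models):
    for r in range(n_models):
        for p in range(n_prompts):
            margin = scores[p, k] - scores[p, r]
            king_payoff[k, r, p] = margin          # King wants to beat the Rebel on the chosen prompt
            rebel_payoff[k, r, p] = -margin        # Rebel wants the opposite (zero-sum between them)
            prompt_payoff[k, r, p] = abs(margin)   # Prompt player is rewarded for separating the two

print(king_payoff.shape)  # (3, 3, 3): one normal-form payoff tensor per player
```

The King/Rebel pair is adversarial, while the Prompt player is paid only for picking prompts that differentiate them, which matches the intuition above; the equilibrium solution concepts used to turn such a game into ratings are in the paper.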

[🧵2/N] Why the concern? Elo averages performance. If prompt sets are biased or redundant (intentionally or not!), rankings can be skewed. 😟 Our simulations show this can even reinforce biases, pushing models to specialize narrowly instead of improving broadly (see skill entropy drop!). 📉 #EloRating

17.04.2025 16:12 | 👍 2    🔁 0    💬 1    📌 0

[🧵1/N] Thrilled to share our work "Re-evaluating Open-Ended Evaluation of Large Language Models"! 🚀 Popular LLM leaderboards (think Elo/Chatbot Arena) are useful, but are they telling the whole story? We find issues w/ redundancy & bias. 🤔
Paper @ ICLR 2025: arxiv.org/abs/2502.20170 #LLM #ICLR2025

17.04.2025 16:12 | 👍 14    🔁 2    💬 2    📌 1
SCaLA-25
A workshop connecting research topics in social choice and learning algorithms.

Working at the intersection of social choice and learning algorithms?

Check out the 2nd Workshop on Social Choice and Learning Algorithms (SCaLA) at @ijcai.bsky.social this summer.

Submission deadline: May 9th.

I attended last year at AAMAS and loved it! 👍

sites.google.com/corp/view/sc...

26.03.2025 20:18 | 👍 18    🔁 6    💬 0    📌 2
Post image Post image

๐ŸฅIntroducing Gemini 2.5, our most intelligent model with impressive capabilities in advanced reasoning and coding.

Now integrating thinking capabilities, 2.5 Pro Experimental is our most performant Gemini model yet. Itโ€™s #1 on the LM Arena leaderboard. ๐Ÿฅ‡

25.03.2025 17:25 โ€” ๐Ÿ‘ 216    ๐Ÿ” 65    ๐Ÿ’ฌ 34    ๐Ÿ“Œ 11

Looking for a principled evaluation method for ranking *general* agents or models, i.e. agents that are evaluated across a myriad of different tasks?

I'm delighted to tell you about our new paper, Soft Condorcet Optimization (SCO) for Ranking of General Agents, to be presented at AAMAS 2025! 🧵 1/N

24.02.2025 15:25 | 👍 63    🔁 17    💬 1    📌 6
Post image

[🧵13/N] It is also possible to plot each task's contribution to the deviation rating, making the trade-offs between the models easy to see at a glance. Negative bars mean worse than equilibrium at that task. So Sonnet is relatively weaker at "summarize" and Llama is relatively weaker at "LCB generation".

24.02.2025 14:00 | 👍 1    🔁 0    💬 0    📌 0
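
As a rough illustration of the kind of plot described above, a horizontal bar chart of per-task contributions works well; the task names and numbers below are made up, not values from the paper:

```python
import matplotlib.pyplot as plt

# Hypothetical per-task contributions to each model's deviation rating.
tasks = ["coding", "math", "reasoning", "summarize", "LCB generation"]
contributions = {
    "Sonnet": [0.12, 0.05, 0.08, -0.06, 0.03],
    "Llama":  [0.04, 0.09, 0.02, 0.05, -0.07],
}

fig, axes = plt.subplots(1, len(contributions), figsize=(10, 3), sharey=True)
for ax, (model, values) in zip(axes, contributions.items()):
    colors = ["tab:red" if v < 0 else "tab:blue" for v in values]
    ax.barh(tasks, values, color=colors)            # negative bars: worse than equilibrium on that task
    ax.axvline(0.0, color="black", linewidth=0.8)
    ax.set_title(model)
    ax.set_xlabel("contribution to deviation rating")
plt.tight_layout()
plt.show()
```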

[🧵12/N] We are convinced this is a better approach than Elo or simple averaging. Please read the paper for more details! 🤓

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵11/N] Our work proposes the first rating method, "Deviation Ratings", that is both dominant- and clone-invariant in fully general N-player, general-sum interactions, allowing us to evaluate general models in a theoretically grounded way. 👍

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵10/N] A three-player game, in which two symmetric model players try to beat each other (by playing strong models) on a task selected by a task player incentivised to separate the models, is an improved formulation. 👍 However, Nash Averaging is only defined for two-player zero-sum games. 😭

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵9/N] Unfortunately, a two-player zero-sum interaction is limiting. For example, if no model can solve a task, the task player would only play that impossible task, resulting in uninteresting ratings. 🙁

18.02.2025 10:49 | 👍 1    🔁 0    💬 1    📌 0

[🧵8/N] This is hugely powerful for two reasons. 1) When including tasks in the evaluation set one can be maximally inclusive: redundancies are axiomatically ignored, which simplifies curation for evaluation. 2) Salient strategies are automatically reweighted according to their significance. 💪

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵7/N] This approach is provably clone- and dominant-invariant: adding copies of tasks and models, or adding dominated tasks and models, does not influence the rating *at all*. The rating is invariant to two types of redundancies! 🤩 Notably, neither an average nor Elo have these properties.

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0
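
For intuition on why a plain average is not clone-invariant, here is a tiny toy example with made-up scores (my own illustration, not data from the paper). Duplicating one task flips the average-score ranking even though no new skill was added; a clone-invariant rating would be unaffected:

```python
import numpy as np

# Toy score table: rows = models, columns = tasks.
scores = np.array([
    [0.8, 0.6, 0.7],   # model A: solid generalist
    [0.5, 0.9, 0.4],   # model B: specialist on task 1
])
print(scores.mean(axis=1))   # [0.7, 0.6] -> A ranked above B

# Clone task 1 five times (e.g. many near-duplicate prompts measuring the same skill).
cloned = np.hstack([scores, np.repeat(scores[:, [1]], 5, axis=1)])
print(cloned.mean(axis=1))   # [0.6375, 0.7875] -> the ranking flips under simple averaging
```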
Re-evaluating Evaluation
Progress in machine learning is measured by careful evaluation on problems of outstanding common interest. However, the proliferation of benchmark suites and environments, adversarial attacks, and oth...

[🧵6/N] A previous approach, called Nash Averaging (arxiv.org/abs/1806.02643), formulated the problem as a two-player zero-sum game where a model player maximises performance on tasks by playing strong models and a task player minimises performance by selecting difficult tasks. ♟️

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0
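
For intuition, Nash Averaging boils down to solving this two-player zero-sum game. Below is a minimal sketch of the underlying computation using a standard linear program via scipy; the function and variable names are my own, and Nash Averaging additionally selects the maximum-entropy equilibrium among the solutions, which this sketch does not do:

```python
import numpy as np
from scipy.optimize import linprog

def maxmin_strategy(payoff):
    """Equilibrium (max-min) mixed strategy for the row player of a zero-sum game.

    payoff[i, j] is the row player's payoff when playing row i against column j.
    Returns (mixed strategy over rows, game value).
    """
    n_rows, n_cols = payoff.shape
    # Variables: x_1..x_n (row strategy) and v (game value); minimise -v to maximise v.
    c = np.concatenate([np.zeros(n_rows), [-1.0]])
    # Require v <= sum_i x_i * payoff[i, j] for every column j.
    A_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # The strategy must be a probability distribution; v is unconstrained.
    A_eq = np.concatenate([np.ones(n_rows), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_rows], res.x[-1]

# Toy S[m, t]: score of model m on task t.
S = np.array([[0.9, 0.2], [0.6, 0.6], [0.3, 0.8]])
model_dist, value = maxmin_strategy(S)    # model player maximises its expected score
task_dist, _ = maxmin_strategy(-S.T)      # task player minimises the models' expected score
ratings = S @ task_dist                   # expected score of each model under the equilibrium task weights
print(model_dist, task_dist, ratings)
```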

[🧵5/N] Therefore, there is a strategic decision about which tasks are important and which model is best. Where there is a strategic interaction, it can be modelled as a game! Model players select models, and task players select tasks. The players may play distributions to avoid being exploited.

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵4/N] The tasks may be samples from an infinite space of tasks. Is the distribution of prompts submitted to LMSYS representative of the diversity and utility of skills we wish to evaluate LLMs on? Can we agree on such a distribution, if one even exists? 🤔

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵3/N] Furthermore, the set of tasks a model is evaluated on may not be curated. For example, the frequency of tasks may not be proportional to their importance. Tasks may also differ in difficulty and breadth of underlying skills measured.

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0

[🧵2/N] Game-theoretic ratings are useful for evaluating generalist models (including LLMs), where there are many underlying skills/tasks to be mastered. Usually no single model is dominant on all metrics. 🥇

18.02.2025 10:49 | 👍 2    🔁 0    💬 1    📌 0
Deviation Ratings: A General, Clone-Invariant Rating Method
Many real-world multi-agent or multi-task evaluation scenarios can be naturally modelled as normal-form games due to inherent strategic (adversarial, cooperative, and mixed motive) interactions. These...

[🧵1/N] Please check out our new paper (arxiv.org/abs/2502.11645) on game-theoretic evaluation. It is the first method that results in clone-invariant ratings in N-player, general-sum interactions. Co-authors: @liusiqi.bsky.social, Ian Gemp, Georgios Piliouras, @sharky6000.bsky.social 🎉

18.02.2025 10:49 | 👍 14    🔁 2    💬 2    📌 3

@lukemarris is following 20 prominent accounts