
Florian Dorner

@flodorner.bsky.social

PhD student in CS @ ETHZ / MPI-IS. Theory of ML evaluation. https://flodorner.github.io/

77 Followers  |  278 Following  |  38 Posts  |  Joined: 05.12.2024

Latest posts by flodorner.bsky.social on Bluesky

In light of the discussions about LLM-generated ICLR reviews, I recently wondered whether a similar dynamic might play out for LLMs: while pre-training objectives promote approximate indistinguishability of generated text, increasingly heavy post-training might make detection a lot easier...

12.12.2025 18:03 — 👍 1    🔁 0    💬 0    📌 0
Preview
Limits to scalable evaluation at the frontier: LLM as Judge won't beat twice the data
High quality annotations are increasingly a bottleneck in the explosively growing machine learning ecosystem. Scalable evaluation methods that avoid costly annotation have therefore become an importan...

In the second paper (arxiv.org/abs/2410.13341), we show that LLM judges weaker than the models they evaluate are of limited use for benchmarking, even if their judgments are processed in a statistically optimal way. Correspondingly, we cannot rely on LLM judges for evaluating frontier models.

05.12.2025 08:57 — 👍 3    🔁 0    💬 1    📌 0
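To make the claim concrete, here is a minimal simulation in the spirit of the paper (my own sketch with made-up numbers, not the paper's code): debias the judge's verdicts with a small gold-labeled set, prediction-powered-inference style, and compare the resulting standard error to simply collecting twice as many gold labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up setup: the evaluated model has true accuracy 0.7, and the judge
# agrees with the gold label on 80% of examples.
true_acc, judge_acc = 0.7, 0.8
n_gold, n_unlabeled, n_trials = 500, 50_000, 2_000

def one_trial():
    # Small gold-labeled slice: true correctness plus the judge's noisy verdict.
    gold = rng.random(n_gold) < true_acc
    judge_gold = np.where(rng.random(n_gold) < judge_acc, gold, ~gold)
    # Large unlabeled slice: only judge verdicts are available here.
    unl = rng.random(n_unlabeled) < true_acc
    judge_unl = np.where(rng.random(n_unlabeled) < judge_acc, unl, ~unl)
    # Debiased judge-based estimate: judge mean plus a gold-set bias correction.
    debiased = judge_unl.mean() + (gold.mean() - judge_gold.mean())
    # Baseline: skip the judge and simply annotate twice as much gold data.
    twice_gold = (rng.random(2 * n_gold) < true_acc).mean()
    return debiased, twice_gold

ests = np.array([one_trial() for _ in range(n_trials)])
print("std, judge + debiasing:", ests[:, 0].std())
print("std, 2x gold labels:   ", ests[:, 1].std())
```

With these numbers the debiased judge estimate comes out noisier than just doubling the gold annotations; in this toy setup the judge only starts beating the 2x baseline once its agreement rate exceeds roughly 90%.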
Preview
ROC-n-reroll: How verifier imperfection affects test-time scaling
Test-time scaling aims to improve language model performance by leveraging additional compute during inference. Many works have empirically studied techniques such as Best-of-N (BoN) and Rejection Sam...

In the first paper (arxiv.org/abs/2507.12399), we characterize, based on the verifier's ROC curve, how LLM judge errors affect test-time scaling via Best-of-N. Our results point towards more efficient alternatives to Best-of-N and explain why scaling laws for test-time scaling are unreliable.

05.12.2025 08:57 — 👍 2    🔁 0    💬 1    📌 0
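For intuition, a small Best-of-N simulation under an assumed Gaussian verifier score model (my illustration, not the paper's setup): how accuracy grows with N is controlled by how well the verifier's scores separate correct from incorrect answers, i.e., by its ROC curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up setup: the generator is correct with probability 0.3; the verifier
# assigns Gaussian scores whose mean is higher for correct answers, which
# induces a smooth ROC curve between correct and incorrect candidates.
p_correct, score_gap, n_trials = 0.3, 1.0, 20_000

def best_of_n(n):
    correct = rng.random((n_trials, n)) < p_correct
    scores = rng.normal(size=(n_trials, n)) + score_gap * correct
    picked = scores.argmax(axis=1)  # Best-of-N: keep the highest-scoring answer
    return correct[np.arange(n_trials), picked].mean()

for n in [1, 2, 4, 8, 16, 64, 256]:
    print(f"Best-of-{n:>3}: accuracy {best_of_n(n):.3f}")
```

Widening score_gap (a better ROC curve) makes the accuracy curve climb much faster with N; with a weak verifier, extra samples largely hand the verifier more convincing wrong answers to pick from.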

Meet me at the Benchmarking workshop (sites.google.com/view/benchma...) at EurIPS on Saturday: We'll present two works on errors in LLM-as-Judge and their impact on benchmarking and test-time scaling:

05.12.2025 08:57 — 👍 7    🔁 3    💬 1    📌 0
Post image

I'll be at @neuripsconf.bsky.social, presenting Strategic Hypothesis Testing (spotlight!)

tldr: Many high-stakes decisions (e.g., drug approval) rely on p-values, but people submitting evidence respond strategically even w/o p-hacking. Can we characterize this behavior & how policy shapes it?

1/n

01.12.2025 20:31 — 👍 17    🔁 4    💬 1    📌 0
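A toy version of the strategic setup (my sketch, not the paper's model, with made-up numbers): an agent who knows its true effect size runs a costly trial only when the expected reward from clearing the significance bar exceeds the cost, so the regulator's threshold and the trial cost jointly determine who submits evidence at all.

```python
import numpy as np
from scipy.stats import norm

# Toy strategic agent: runs a costly trial only if the expected payoff from
# clearing the p < 0.05 bar exceeds the cost. All numbers are made up.
alpha, n, reward, cost = 0.05, 100, 1.0, 0.3
z_crit = norm.ppf(1 - alpha)  # critical value of a one-sided z-test

def power(effect):
    # P(p < alpha) with n unit-variance samples and true mean `effect`.
    return norm.sf(z_crit - effect * np.sqrt(n))

for effect in [0.0, 0.1, 0.2, 0.3]:
    submits = reward * power(effect) > cost
    print(f"effect {effect:.1f}: power {power(effect):.2f}, runs trial: {submits}")
```

Even without any p-hacking, the population of submitted studies is self-selected: tightening alpha or raising costs screens out weak effects, which is one way policy shapes strategic behavior.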

Also, from time to time, the incorrect proofs it suggests for more complicated statements seem to contain non-trivial insights and are "fixable".

25.10.2025 15:41 — 👍 1    🔁 0    💬 0    📌 0

Not much of a step up compared to the o1/o3 "thinking" versions of GPT-4, but quite a big step compared to base GPT-4. It still makes a lot of mistakes, but often produces correct proofs for simple lemmata (not so much for more complicated statements).

25.10.2025 15:38 — 👍 1    🔁 1    💬 1    📌 0
Preview
Vivian Nastl and Ricardo Dominguez-Olmedo receive 2025 Google Ph.D. Fellowship
Program supports exceptional graduate students working on innovative research in computer science and related fields

Congratulations also to Vivian Nastl (supervised by Moritz Hardt) and Ricardo Dominguez-Olmedo (supervised by Moritz Hardt and Bernhard Schölkopf) for winning 2025 Google PhD Fellowships.
Find out more about their work here: is.mpg.de/en/news/vivi...

@maxplanckcampus.bsky.social @unituebingen.bsky.social

24.10.2025 09:33 — 👍 5    🔁 2    💬 0    📌 0
Post image (x4)

The viral "Definition of AGI" paper cites fake references that do not exist!

Proof: different articles appear at the specified journal/volume/page numbers, and their titles show up nowhere in any searchable repository.

Take this as a warning not to use LMs to generate your references!

18.10.2025 00:54 — 👍 157    🔁 36    💬 6    📌 16

Assuming all problems are actually solvable...

17.10.2025 21:58 — 👍 0    🔁 0    💬 0    📌 0

Is that not trivially true, since LLMs assign nonzero probability to any possible string?

17.10.2025 21:58 — 👍 0    🔁 0    💬 1    📌 0
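The argument behind this reply, as a tiny sketch (ignoring finite-precision underflow and samplers that truncate to top-k/top-p, which can zero probabilities in practice): a softmax never outputs an exact zero, so any finite string gets a strictly positive probability.

```python
import math

# Each decoding step is a softmax over logits; softmax outputs are strictly
# positive, and a string's probability is a product of per-token probabilities,
# hence also strictly positive.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 3-token vocabulary: even the token with a very low logit keeps p > 0.
step_probs = softmax([10.0, 0.0, -30.0])
print(step_probs)             # every entry is strictly positive
print(math.prod(step_probs))  # tiny, but still nonzero
```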
Post image

We (w/ Moritz Hardt, Olawale Salaudeen and
@joavanschoren.bsky.social) are organizing the Workshop on the Science of Benchmarking & Evaluating AI @euripsconf.bsky.social 2025 in Copenhagen!

📢 Call for Posters: rb.gy/kyid4f
📅 Deadline: Oct 10, 2025 (AoE)
🔗 More info: rebrand.ly/bg931sf

22.09.2025 13:45 — 👍 21    🔁 7    💬 1    📌 0

Do you have a list of the best ones? I vaguely recall reading things in this direction, but cannot really remember specific titles.

21.09.2025 20:11 — 👍 1    🔁 0    💬 0    📌 0
Post image

Wouldn't it be great to have questions about LM internals answered in plain English? That's the promise of verbalization interpretability. Unfortunately, our new paper shows that evaluating these methods is nuanced, and verbalizers might not tell us what we hope they do. 🧵👇 1/8

17.09.2025 19:19 — 👍 26    🔁 8    💬 1    📌 1

The focus on evaluating checkpoints during a training run rather than different trained models is super interesting!

17.09.2025 05:16 — 👍 1    🔁 0    💬 1    📌 0
Preview
How Benchmark Prediction from Fewer Data Misses the Mark
Large language model (LLM) evaluation is increasingly costly, prompting interest in methods that speed up evaluation by shrinking benchmark datasets. Benchmark prediction (also called efficient LLM ev...

Interesting work! Can you comment a bit on what you do differently compared to previous IRT-based LLM evaluation methods?

We recently did some work confirming IRT's efficacy for in-distribution models, but also found it to be quite brittle when it comes to novel models: arxiv.org/abs/2506.07673

17.09.2025 05:11 — 👍 1    🔁 0    💬 2    📌 0
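For readers unfamiliar with the approach, a minimal Rasch-style (1PL) sketch of IRT-based benchmark prediction (my illustration with synthetic data, not the method from either paper): estimate item difficulties from previously evaluated models, fit a new model's ability from a small subset of items, and extrapolate its full-benchmark score.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# Rasch (1PL) model: P(model answers item j) = sigmoid(theta_model - b_j).
# All numbers below are made up.
n_old, n_items, n_sub = 50, 500, 50
theta_old = rng.normal(0, 1, n_old)  # abilities of previously seen models
b = rng.normal(0, 1, n_items)        # true item difficulties
Y_old = rng.random((n_old, n_items)) < sigmoid(theta_old[:, None] - b)

# Step 1: estimate item difficulties from the old models' full responses
# (crude moment estimate; real pipelines fit these by maximum likelihood).
p_item = Y_old.mean(axis=0).clip(0.02, 0.98)
b_hat = -np.log(p_item / (1 - p_item))

# Step 2: a new model answers only n_sub random items; fit its ability by
# gradient ascent on the (concave) Rasch log-likelihood.
theta_new = 0.8
subset = rng.choice(n_items, size=n_sub, replace=False)
y = rng.random(n_sub) < sigmoid(theta_new - b[subset])
t = 0.0
for _ in range(200):
    t += 0.5 * (y - sigmoid(t - b_hat[subset])).mean()

# Step 3: predict the full-benchmark score without running the other items.
print(f"predicted full-benchmark score {sigmoid(t - b_hat).mean():.3f} "
      f"vs true score {sigmoid(theta_new - b).mean():.3f}")
```

The brittleness concern is visible even in this sketch: if a new model's response pattern is not well described by the difficulties estimated from the old model population, the extrapolated score can be badly off.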

I guess, in terms of the notation from Section 4 of the paper: does this plot the Type X risk, or the Type X Error Feasibility rate?

14.09.2025 14:52 — 👍 0    🔁 0    💬 0    📌 0

, at least for large n. So I am trying to understand whether the asymptotics kick in a lot slower than I would have thought, or whether I am missing something else about the setup.

14.09.2025 14:44 — 👍 0    🔁 0    💬 0    📌 0

Thank you! Do I understand correctly that these results are independent of/orthogonal to the success-hacking ones? I guess my confusion stems from asymptotic theory for PPI (and by extension seemingly for DSL) suggesting that both type 1 and type 2 errors should be lower/at most very similar

14.09.2025 14:44 — 👍 0    🔁 0    💬 1    📌 0

Are the reported errors for the case of selecting the model with the most significant results, post-hoc?

12.09.2025 19:18 — 👍 0    🔁 0    💬 1    📌 0

Interesting work! Can you comment a bit more on the setup for the regression correction methods? As far as I understand, PPI++ (which should be quite similar to DSL) reduces variance relatively reliably compared to using ground-truth labels only, while remaining quite close to unbiased.

12.09.2025 19:18 — 👍 0    🔁 0    💬 2    📌 0
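For context, a minimal version of the PPI++ mean estimator under discussion (my sketch on synthetic Gaussian data; PPI++ is due to Angelopoulos et al., 2023): the power-tuning weight lambda is chosen to minimize variance, and the estimator is unbiased for any fixed lambda.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: gold labels y with mean 0.6, and model predictions f that
# are correlated with y (noise scale 0.8). n gold labels, N unlabeled points.
n, N, n_trials, mu = 200, 10_000, 2_000, 0.6

def one_trial():
    y_lab = rng.normal(mu, 1.0, n)
    f_lab = y_lab + rng.normal(0.0, 0.8, n)                   # predictions, labeled
    f_unl = rng.normal(mu, 1.0, N) + rng.normal(0.0, 0.8, N)  # predictions, unlabeled
    # Power-tuned lambda from PPI++: Cov(y, f) / ((1 + n/N) * Var(f)).
    lam = np.cov(y_lab, f_lab)[0, 1] / ((1 + n / N) * f_lab.var(ddof=1))
    ppi = lam * f_unl.mean() + (y_lab - lam * f_lab).mean()
    return ppi, y_lab.mean()                                  # vs gold-only mean

ests = np.array([one_trial() for _ in range(n_trials)])
for name, col in [("PPI++    ", 0), ("gold-only", 1)]:
    print(f"{name} bias {ests[:, col].mean() - mu:+.4f}  std {ests[:, col].std():.4f}")
```

On this synthetic data, PPI++ is essentially unbiased and clearly reduces the standard error relative to gold labels alone; whether those guarantees survive post-hoc selection of the most significant model is exactly the question in this thread.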
Post image

Does anyone have background on this plot, compared to the 32% performance for o3-mini-high with tool use claimed by OpenAI in January? #GPT5 #GPT-5

openai.com/index/introd...
openai.com/index/openai...

08.08.2025 09:28 — 👍 1    🔁 0    💬 0    📌 0
Preview
Limits to scalable evaluation at the frontier: LLM as Judge won't beat twice the data
High quality annotations are increasingly a bottleneck in the explosively growing machine learning ecosystem. Scalable evaluation methods that avoid costly annotation have therefore become an importan...

Super interesting field, but worth keeping in mind that this usually only buys you a relatively small fraction of "extra ground truth labels" (this does not cover active sampling strategies, but I have not seen them yielding much larger improvements in practice, either): arxiv.org/abs/2410.13341

23.07.2025 13:28 — 👍 2    🔁 0    💬 0    📌 0

Do you have a source re: attendance requirement? 👀

17.07.2025 17:28 — 👍 0    🔁 0    💬 1    📌 0

Not sure this can ethically be done retroactively (due to participant consent). But given that 20% of the data is shared with model providers, privacy concerns with instead sharing this data publicly in the future seem surmountable.

10.05.2025 08:59 — 👍 0    🔁 0    💬 0    📌 0
How to Fix the Chatbot Arena? Release All Data

New blogpost by my colleague Ricardo, arguing that instead of limiting data collection from big labs, LMArena should publicly release all data for everyone. ricardodominguez.github.io/blogs/arena....

10.05.2025 08:59 — 👍 1    🔁 0    💬 1    📌 0

Is this just the prompts, or do model providers get information about whether or not they won (and the competing response)?

30.04.2025 14:55 — 👍 0    🔁 0    💬 1    📌 0

Shout out to my colleagues Ricardo Dominguez-Olmedo, Vivian Nastl and Moritz Hardt! If you'd like to chat at the conference, send me a message, or visit us at one of the poster sessions!

24.04.2025 01:36 — 👍 0    🔁 0    💬 0    📌 0
Post image
24.04.2025 01:36 — 👍 0    🔁 0    💬 1    📌 0
Preview
Limits to scalable evaluation at the frontier: LLM as Judge won't beat twice the data
High quality annotations are increasingly a bottleneck in the explosively growing machine learning ecosystem. Scalable evaluation methods that avoid costly annotation have therefore become an importan...

Tomorrow, I will speak about our work on the limitations of LLM-as-a-Judge 🤖 when applied to evaluating frontier models. (Session 3D)
arxiv.org/abs/2410.13341

24.04.2025 01:36 — 👍 1    🔁 0    💬 1    📌 0
