Looking forward to chatting about limitations of AI annotators/LLM-as-a-Judge, opportunities for improving them, evaluating AI personality/character, and the future of evals more broadly!
27.07.2025 15:22 · @arduin.io.bsky.social
Working on evaluation of AI models (via human and AI feedback) | PhD candidate @cst.cam.ac.uk Web: https://arduin.io Github: https://github.com/rdnfn Latest project: https://app.feedbackforensics.com
27.07.2025 15:22
I'll be at #ACL2025 presenting research from my Apple internship! Our poster is titled: "Can External Validation Tools Improve Annotation Quality for LLM-as-a-Judge?"
Let's meet: come by our poster on Tuesday (29/7), 10:30-12:00, Hall 4/5, or DM me to set up a meeting!
Paper link below.
Excited to be in Singapore for ICLR! Keen to chat about interpreting feedback data and detecting model characteristics.
Reach out or come by our poster on Inverse Constitutional AI on Friday 25 April from 10am-12.30pm (#520 in Hall 2B) - @timokauf.bsky.social and I will be there!
If you want to understand your own model and data better, try Feedback Forensics!
Install it from GitHub: github.com/rdnfn/feedba...
View interactive results: app.feedbackforensics.com?data=arena_s...
See the accompanying blog post for all the details: arduin.io/blog/llama4-analysis
Preliminary analysis. Usual caveats for AI annotators and potentially inconsistent sampling procedures apply.
Conclusion: the differences between the arena and public versions of Llama 4 Maverick highlight the importance of understanding preference data in detail, beyond single aggregate numbers or rankings! (Feedback Forensics can help.)
17.04.2025 13:55
Bonus 2: Humans like the arena model's behaviours
Human annotators on Chatbot Arena indeed like the change in tone, more verbose responses and adapted formatting.
Bonus 1: Things that stayed consistent
I also find that some behaviours stayed the same: on the Arena dataset prompts, the public and arena model versions are similarly very unlikely to suggest illegal activities, be offensive or use inappropriate language.
Further differences: clearer reasoning, more references, …
There are quite a few other differences between the two models beyond the three categories already mentioned. See the interactive online results for a full list: app.feedbackforensics.com?data=arena_s...
3️⃣ Third: Formatting - a lot of it!
The arena model uses more bold, italics, numbered lists and emojis relative to its public version.
2️⃣ Second: Tone - friendlier, more enthusiastic, more humorous …
Next, the results highlight how much friendlier, more emotional, enthusiastic, humorous, confident and casual the arena model is relative to its own public-weights version (and also its opponent models).
So how exactly is the arena version different to the public Llama 4 Maverick model? I make a few observations…
1️⃣ First and most obvious: responses are more verbose. The arena model's responses are longer than the public version's for 99% of prompts.
Note on interpreting metrics: values above 0 mean the characteristic is more present in the arena model's responses than in the public model's. See the linked post for details.
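To make the sign convention concrete, here is a toy sketch of a signed "relative presence" score, using verbosity as the example characteristic. The scoring function and the annotation flags below are my own illustration, not the actual Feedback Forensics metric.

```python
# Toy sketch of a signed "relative presence" metric for a characteristic.
# Hypothetical; NOT the actual Feedback Forensics metric.

def relative_presence(flags_arena, flags_public):
    """Difference in how often a characteristic is flagged in each model's
    responses. Ranges from -1 to 1; values above 0 mean the characteristic
    is more present in the arena model's responses."""
    rate = lambda flags: sum(flags) / len(flags)
    return rate(flags_arena) - rate(flags_public)

# Hypothetical annotations: True = "response is verbose", one flag per prompt.
arena_flags = [True, True, True, False]
public_flags = [False, True, False, False]

score = relative_presence(arena_flags, public_flags)
print(score)  # 0.75 - 0.25 = 0.5 -> verbosity more present in arena model
```

A real annotator-based pipeline would produce such flags per characteristic and per prompt; the aggregation shown here is just the simplest difference of rates.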
17.04.2025 13:55
Setup: I use the original Arena dataset of Llama-4-Maverick experimental generations, kindly released openly by @lmarena. I compare the arena model's responses to those generated by its public-weights version (via Lambda and OpenRouter).
17.04.2025 13:55
Background: Llama 4 Maverick was released earlier this month. Beforehand, a separate experimental version (Llama-4-Maverick-03-26-Experimental) was evaluated on Chatbot Arena. Some have reported that these two models appear to be quite different.
17.04.2025 13:55
How exactly was the initial Chatbot Arena version of Llama 4 Maverick different from the public Hugging Face version?
I used our Feedback Forensics app to quantitatively analyse how exactly these two models differ. An overview…
Feedback Forensics is just getting started: this alpha release has lots of exciting features and experiments on the roadmap. Let me know what other datasets we should analyze or which features you would like to see!
17.03.2025 18:12
Big thanks also to my collaborators on Feedback Forensics and the related Inverse Constitutional AI (ICAI) pipeline: Timo Kaufmann, Eyke Hüllermeier, @samuelalbanie.bsky.social, Rob Mullins!
Code: github.com/rdnfn/feedback-forensics
Note: usual limitations for LLM-as-a-Judge-based systems apply.
... harmless/helpful data by @anthropic.com, and finally the recent OLMo 2 preference mix by @ljvmiranda.bsky.social, @natolambert.bsky.social et al., see all results at app.feedbackforensics.com.
17.03.2025 18:12
We analyze several popular feedback datasets: Chatbot Arena data with topic labels from the Arena Explorer pipeline, PRISM data by @hannahrosekirk.bsky.social et al., AlpacaEval annotations, ...
17.03.2025 18:12
3. Discovering model strengths
How is GPT-4o different to other models? → Uses more numbered lists, but Gemini is more friendly and polite
app.feedbackforensics.com?data=chatbot...
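A formatting trait like "uses more numbered lists" can be approximated with a simple per-response counter. The regex heuristic below is my own illustration (and the example responses are invented), not how Feedback Forensics actually measures this.

```python
import re

# Crude heuristic: count lines that start a numbered-list item ("1. ", "2) ", ...).
# Illustrative only; not the Feedback Forensics measurement.
NUMBERED_ITEM = re.compile(r"^\s*\d+[.)]\s", re.MULTILINE)

def count_numbered_items(response: str) -> int:
    return len(NUMBERED_ITEM.findall(response))

# Hypothetical responses in two different styles.
listy_response = "Here are the steps:\n1. Prepare data\n2. Train model\n3. Evaluate"
prose_response = "Happy to help! First prepare the data, then train, then evaluate."

print(count_numbered_items(listy_response))  # 3
print(count_numbered_items(prose_response))  # 0
```

Comparing the average of such counts across two models' responses gives a rough per-trait signal of the kind the app reports.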
2. Finding preference differences between task domains
How do preferences differ across writing tasks? → Emails should be concise, creative writing more verbose
app.feedbackforensics.com?data=chatbot...
1. Visualizing dataset differences
How does Chatbot Arena differ from Anthropic Helpful data? → Prefers less polite but better-formatted responses
app.feedbackforensics.com?data=chatbot...
Introducing Feedback Forensics: a new tool to investigate pairwise preference data.
Feedback data is notoriously difficult to interpret and has many known issues β our app aims to help!
Try it at app.feedbackforensics.com
Three example use-cases:
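For context on why such data is hard to interpret: a pairwise preference record typically looks something like the sketch below (field names and values are my own illustration, not the app's schema). The label records which response won, but not why.

```python
# Minimal sketch of a pairwise preference record (hypothetical field names,
# not the Feedback Forensics schema). The label says WHICH response won,
# but not WHY -- tone, length, and formatting are all entangled in it.
record = {
    "prompt": "Explain photosynthesis in one paragraph.",
    "response_a": "Photosynthesis converts light into chemical energy...",
    "response_b": "Sure! Photosynthesis is how plants turn sunlight...",
    "preferred": "response_b",  # annotator's choice; the reason is unrecorded
}

# Tools like Feedback Forensics infer which characteristics (verbosity,
# tone, formatting, ...) distinguish preferred from rejected responses.
winner = record[record["preferred"]]
print(winner.startswith("Sure!"))  # True
```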