25.09.2025 12:25 · 👍 0 🔁 0 💬 0 📌 0
Excited to have two papers accepted to #ACL2025!
Our first paper designs a preference training method to boost LLM personalization.
While the second outlines our position on why MCQA evals are terrible and how to make them better.
Grateful for amazing collaborators!
Want to know what training data has been memorized by models like GPT-4?
We propose information-guided probes, a method to uncover memorization evidence in *completely black-box* models,
without requiring access to
❌ Model weights
❌ Training data
❌ Token probabilities
🧵 (1/5)
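The thread doesn't spell out how the information-guided probes work, so here is only a rough, hypothetical illustration of the black-box setting they describe: with no weights, training data, or token probabilities, all you can do is compare a model's free-text continuation of a document prefix against the true continuation. The `generate` callable and the n-gram overlap score are illustrative assumptions, not the paper's actual method.

```python
from typing import Callable

def prefix_probe(prefix: str, true_continuation: str,
                 generate: Callable[[str], str], n: int = 3) -> float:
    """Fraction of the true continuation's word n-grams that the model
    reproduces from the prefix alone (generated text only)."""
    gen = generate(prefix).split()
    ref = true_continuation.split()
    ref_ngrams = {tuple(ref[i:i + n]) for i in range(len(ref) - n + 1)}
    gen_ngrams = {tuple(gen[i:i + n]) for i in range(len(gen) - n + 1)}
    return len(ref_ngrams & gen_ngrams) / max(len(ref_ngrams), 1)

# Toy black-box "model" that has memorized exactly one sentence.
DOC = "call me ishmael some years ago never mind how long precisely"

def toy_model(prefix: str) -> str:
    if DOC.startswith(prefix):
        return DOC[len(prefix):].strip()   # parrots memorized text
    return "no idea what comes next in this text"

prefix = "call me ishmael"
continuation = DOC[len(prefix):].strip()
print(prefix_probe(prefix, continuation, toy_model))            # high -> memorized
print(prefix_probe("hello there friend", continuation, toy_model))  # low -> not
```

A high overlap on held-out training documents, but not on paraphrases, is the kind of behavioral evidence a black-box probe can surface.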
[Figure] Graph showing that simple text-completion models more accurately imitate the unrhymed form of 20th-century verse, whereas instruction-tuned models lapse into rhyme more often. Caption: Given the first 5 lines of 10-20 line poems by poets born in each century from 1600 to 2000, LLMs are prompted to "complete" the poem. Rhyme is measured by exact phoneme match in the rime of the final syllable (or syllables, if the final syllable is unstressed). Poems are randomly sampled from the Chadwyck-Healey poetry collections, with 600 poems per model per century. Results are shown for the actual poems as well as the LLM imitations; poems "memorized" by the model are excluded.
Finally may have figured out why LLMs rhyme so compulsively: instruction-tuning. Training an LLM to respond "helpfully" to user queries may push models into more "pleasing" aesthetic forms.
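The rhyme criterion from the caption can be sketched with ARPABET phonemes, as found in the CMU Pronouncing Dictionary. This is one plausible reading of "exact phoneme match in the rime of the final syllable(s)" (phones from the last stressed vowel onward), not the authors' code:

```python
def rime(phones: list[str]) -> tuple[str, ...]:
    """Phones from the last stressed vowel to the end of the word.
    ARPABET vowels carry a trailing stress digit: 0, 1, or 2."""
    stressed = [i for i, p in enumerate(phones) if p[-1] in "12"]
    vowels = [i for i, p in enumerate(phones) if p[-1] in "012"]
    start = stressed[-1] if stressed else vowels[-1]  # fall back if unstressed
    return tuple(p.rstrip("012") for p in phones[start:])

def rhymes(a: list[str], b: list[str]) -> bool:
    """Exact phoneme match in the rime, ignoring stress digits."""
    return rime(a) == rime(b)

# CMU-dict style transcriptions:
cat = ["K", "AE1", "T"]
hat = ["HH", "AE1", "T"]
dog = ["D", "AO1", "G"]
print(rhymes(cat, hat), rhymes(cat, dog))  # True False
```

Running this over each poem's final line pairs would give the exact-match rhyme rate plotted per century.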
21.03.2025 09:57 · 👍 29 🔁 8 💬 3 📌 3
Had a great time presenting my research on building more helpful QA systems @imperialcollegeldn.bsky.social! Thank you @joestacey.bsky.social for letting me invite myself 🫶
And loved visiting London + Edinburgh this week, hope to be back soon!
🚨 Our team at UMD is looking for participants to study how #LLM agent plans can help you answer complex questions:
- $1 per question
- Top 3 fastest + most accurate win $50
- Questions take ~3 min => $20/hr+
Click here to sign up (please join, reposts appreciated): preferences.umiacs.umd.edu
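The advertised rate checks out with quick arithmetic:

```python
minutes_per_question = 3
dollars_per_question = 1.00
questions_per_hour = 60 / minutes_per_question   # 20 questions/hour
hourly_rate = questions_per_hour * dollars_per_question
print(hourly_rate)  # 20.0 dollars/hour, before the $50 prizes
```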
🚨 New Position Paper 🚨
Multiple-choice evals for LLMs are simple and popular, but we know they are awful.
We complain they're full of errors, saturated, and test nothing meaningful, so why do we still use them?
Here's why MCQA evals are broken, and how to fix them 🧵
If it is truly helpful, honest, and harmless, yes.
26.02.2025 01:12 · 👍 1 🔁 0 💬 0 📌 0
The alignment is a system prompt saying "if the user asks X, do Y"
26.02.2025 01:04 · 👍 0 🔁 0 💬 1 📌 0
⚠️ Current methods for generating instruction-following data fall short for long-range reasoning tasks like narrative claim verification.
We present CLIPPER ✂️, a compression-based pipeline that produces grounded instructions for ~$0.50 each, 34x cheaper than human annotations.
And huge thanks to my friends and labmates who let me bother them to find the right people, review the paper, and have useful discussions!
@saxon.me @lasha.bsky.social @yysung.bsky.social @maharshigor.bsky.social @matthewshu.com @houyu0930.bsky.social
(and many more I'm forgetting, sorry!)
This was a really fun paper to put together with Rachel and @boydgraber.bsky.social, letting me vent many of my frustrations from working with MCQA over the past year 💪🫡
Please check out the paper; we would love to hear your feedback!
In short, here's how to build better evals:
✅ Check if MCQA is the right format for what you want to test
✅ Use design choices to limit leakage, errors, and shortcuts
✅ Keep questions easy for humans, hard for models
If we don't put in this effort, what is MCQA even testing? 🤷‍♂️
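One concrete way to audit a dataset for shortcuts (a hypothetical check, not one claimed by the paper) is a model-free, answer-only baseline such as "always guess the longest option"; on a well-designed MCQA set it should score near chance:

```python
def longest_option_accuracy(questions: list[dict]) -> float:
    """Accuracy of a classic MCQA shortcut: always pick the longest option."""
    hits = 0
    for q in questions:
        guess = max(range(len(q["options"])), key=lambda i: len(q["options"][i]))
        hits += guess == q["answer"]
    return hits / len(questions)

# A (deliberately) leaky toy set: correct answers are the wordiest ones.
leaky = [
    {"options": ["Paris", "Paris, the capital and largest city of France", "Rome"],
     "answer": 1},
    {"options": ["2", "4", "the only even number strictly between 3 and 5"],
     "answer": 2},
]
print(longest_option_accuracy(leaky))  # 1.0 -> the dataset has a shortcut
```

Scoring far above chance without ever reading the question is exactly the kind of artifact that makes an MCQA benchmark test nothing meaningful.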
Lastly, we discuss persistent flaws of LLMs when running MCQA:
- Robustness issues
- Biases
- Unfaithful explanations
Many of our previous solutions to MCQA's format/datasets can better address or evaluate these issues.
Two of the most pressing and promising dataset improvements include:
- Writing MCQs using educators' rubrics to improve question quality
- Designing MCQs that are hard for models but easy for humans (adversarial), rather than creating needlessly impossible or obscure questions
Next, we show that even when MCQA is a good format, our datasets still have issues 🥲
We discuss:
- Dataset leakage
- Unanswerable questions
- Shortcuts
- Saturation
More good news: educators, once again, already have solutions! We also discuss recent work tackling these problems 💪
So what's better? ❤️‍🩹
We explore two possible improvements:
1️⃣ Constructed response (short-form QA)
2️⃣ Explanation MCQA (justifying answers)
Both are grounded in education research, better align with LLM use cases, and test deeper levels of knowledge than MCQA.
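Constructed-response answers need a text scorer rather than option matching; one common choice (assumed here, in the style of SQuAD's evaluation) is token-overlap F1 between the predicted and gold short answers:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold short answer."""
    pred = prediction.lower().split()
    ref = gold.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("the Treaty of Paris", "Treaty of Paris"), 3))  # 0.857
```

Partial credit like this is what lets short-form QA replace the all-or-nothing scoring of MCQA options.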
First, we show MCQA is flawed as a standardized LLM eval format because it often fails to:
- Test subjectivity and generation
- Align with real LLM use cases
- Assess knowledge (based on education research)
When's the last time you asked ChatGPT to answer an MCQ?
We break our position into three points:
1️⃣ Flaws in MCQA's format
2️⃣ Issues in datasets
3️⃣ Weaknesses in how LLMs run MCQA
The good news? Best practices from education, designed for effective student testing, can help fix these 🧑‍🏫
Yet we rarely use these insights in LLM evaluation 🤦
Namely, @boydgraber.bsky.social, @lasha.bsky.social, Rachel, Feng, and folks from Adobe Research 🫡
31.01.2025 14:31 · 👍 0 🔁 0 💬 0 📌 0
Excited to share 2 papers at #NAACL2025 main!
- MoDS: Multi-Doc Summarization for Debatable Queries (Adobe intern work, coming soon!)
- Reverse QA: LLMs struggle with the simple task of giving questions for answers
Grateful for all my collaborators!
People often claim they know when ChatGPT wrote something, but are they as accurate as they think?
Turns out that while the general population is unreliable, those who frequently use ChatGPT for writing tasks can spot even "humanized" AI-generated text with near-perfect accuracy 💯
Manifesting some good luck for my experiment running tonight!
Best of luck to anyone submitting tmrw :)
Exciting research on an AI-driven mnemonic generator for easier vocabulary memorization by @nbalepur.bsky.social, Jordan Boyd-Graber, Rachel Rudinger, & @alexanderhoyle.bsky.social. Part of 21 CLIP projects at #EMNLP2024. Read more: go.umd.edu/1u48 #AI
03.12.2024 15:46 · 👍 3 🔁 1 💬 0 📌 0
OLMo 2 is out 🥳 7B and 13B models trained on 5T tokens, and meticulously instruction-tuned using the Tulu 3 recipe.
Simply the best fully open models yet.
Really proud of the work & the amazing team at @ai2.bsky.social