Now accepted to #neurips25 datasets & benchmarks!
See you in San Diego!
@valentinapy.bsky.social
Postdoc in AI at the Allen Institute for AI & the University of Washington. https://valentinapy.github.io
Can open science beat closed AI? Tülu 3 makes a powerful case. In our new #WiAIRpodcast, we speak with Valentina Pyatkin (@valentinapy.bsky.social) of @ai2.bsky.social and the University of Washington about a fully open post-training recipe: models, data, code, evals, and infra. #WomenInAI 1/8
19.09.2025 16:13

"LLM Post-Training: Open Science That Powers Progress"
On Sept 17, the #WiAIRpodcast speaks with @valentinapy.bsky.social (@ai2.bsky.social & University of Washington) about open science, post-training, mentorship, and visibility.
#WiAIR #NLProc
With fresh support of $75M from NSF and $77M from NVIDIA, we're set to scale our open model ecosystem, bolster the infrastructure behind it, and fast-track reproducible AI research to unlock the next wave of scientific discovery.
14.08.2025 12:16

On my way to Oxford: looking forward to speaking at OxML 2025!
10.08.2025 08:09

The submission deadline is August 26, 2025 (AoE), and decisions will be sent out on September 2, 2025.
Submit your abstracts here:
docs.google.com/forms/d/e/1F...
For the SoLaR workshop
@COLM_conf
we are soliciting opinion abstracts to encourage new perspectives and opinions on responsible language modeling; one or two will be selected for presentation at the workshop.
Please use the Google form below to submit your opinion abstract.
I had a lot of fun contemplating memorization questions at the @l2m2workshop.bsky.social panel yesterday, together with Niloofar Mireshghallah and Reza Shokri, moderated by @pietrolesci.bsky.social, who did a fantastic job!
#ACL2025
I'll be at #ACL2025!!
Would love to chat about all things pragmatics, redefining "helpfulness", and enabling better cross-cultural capabilities!
Presenting our work on culturally offensive nonverbal gestures:
Wed @ Poster Session 4
Hall 4/5, 11:00-12:30
I did! Very, very good!!
19.07.2025 05:19

Tokenization panel!
18.07.2025 22:45

Why is Vancouver sushi so good? (Vancouver food in general, actually.)
18.07.2025 20:27

This week is #ICML in Vancouver, and a number of our researchers are participating. Here's the full list of Ai2's conference engagements; we look forward to connecting with fellow attendees.
14.07.2025 19:30

Book a slot for a chat on my cal:
cal.com/valentinap/i...
Let me know if you want to meet up! Always happy to chat!
11.07.2025 14:09

07/17, Poster: Diverging Preferences: When do Annotators Disagree and do Models Know? icml.cc/virtual/2025...
07/16, Poster: SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior
icml.cc/virtual/2025...
I'll be at ICML in Vancouver next week! #ICML2025
You can find me at the following:
- giving an invited talk at the "Models of Human Feedback for AI Alignment" workshop
- giving an invited talk at the "AI for Math" workshop
I'll also present these two papers:
In Geneva to attend the International Open-Source LLM Builders Summit and present OLMo and Tülu!
06.07.2025 17:23

And I can't forget to thank my amazing co-authors! In particular @saumyamalik.bsky.social and Victoria Graf, with whom I looked through so many constraints.
And @natolambert.bsky.social @hanna-nlp.bsky.social @hamishivi.bsky.social @pdasigi.bsky.social @vwxyzjn.bsky.social
We further discuss what happens when you over-optimize on IF-RLVR: the models tend to prioritize the constraint over the actual instruction! And we suggest possible solutions to this problem.
Paper: buff.ly/1qSA9Pq
Code: github.com/allenai/IFBe...
Additionally, we wrote new training constraints and verifier functions and suggest a good recipe for IF-RLVR training for improved generalization.
We find that IF-RLVR generalization works best on base models and when you train on multiple constraints per instruction!
Beyond math and code, instruction following with verifiable constraints is well suited to learning with RLVR.
But the set of existing constraints and verifier functions is limited, and most models overfit on IFEval.
We introduce IFBench to measure model generalization to unseen constraints.
plus, some fun RL experiments
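To make "verifiable constraints" concrete: a constraint verifier is just a deterministic function of the model's response, so it can directly serve as a binary reward for RLVR. Here is a minimal hypothetical sketch; the function names and constraints are illustrative, not the actual IFBench verifiers or API.

```python
# Hypothetical sketch of instruction-following verifiers for IF-RLVR.
# Each verifier deterministically checks one constraint on the response;
# the reward is 1.0 only if every constraint passes, giving a binary
# verifiable-reward signal. Training on multiple constraints per
# instruction, as the thread suggests, means stacking several verifiers.

def verify_keyword_frequency(response: str, keyword: str, min_count: int) -> bool:
    """Constraint: `keyword` appears at least `min_count` times (case-insensitive)."""
    return response.lower().count(keyword.lower()) >= min_count

def verify_max_words(response: str, max_words: int) -> bool:
    """Constraint: the response stays within a word limit."""
    return len(response.split()) <= max_words

def rlvr_reward(response: str, verifiers) -> float:
    """Binary RLVR reward: 1.0 iff all constraint verifiers pass."""
    return 1.0 if all(v(response) for v in verifiers) else 0.0

response = "Open models help. Open data helps. Open evals help."
verifiers = [
    lambda r: verify_keyword_frequency(r, "open", 3),
    lambda r: verify_max_words(r, 20),
]
reward = rlvr_reward(response, verifiers)  # -> 1.0
```

The over-optimization failure mode mentioned above shows up naturally here: a policy can maximize this reward by satisfying the checkable constraints while ignoring the underlying instruction, which is why the reward is paired with held-out, unseen constraints in evaluation.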
03.07.2025 18:14

This new benchmark created by @valentinapy.bsky.social should be the new default, replacing IFEval. Some of the best frontier models get <50%, and it comes with separate training prompts so people don't effectively train on test.
Wild gap from o3 > Gemini 2.5 Pro of like 30 points.
Introducing IFBench, a benchmark to measure how well AI models follow new, challenging, and diverse verifiable instructions. Top models like Gemini 2.5 Pro or Claude 4 Sonnet score only up to 50%, presenting an open frontier for post-training.
03.07.2025 18:01

Check out our take on Chain-of-Thought.
I really like this paper as a survey of the current literature on what CoT is, but more importantly on what it's not.
It also serves as a cautionary tale about the (apparently quite common) misuse of CoT as an interpretability method.
Submission deadline extended to June 27th AoE!
Our reviewer interest form is also open!
See below for more details.
Submit your paper or sign up to review by June 23, 2025
CFP and workshop info: solar-colm.github.io
Reviewer sign up: docs.google.com/forms/d/e/1F...
Interested in shaping the progress of responsible AI and meeting leading researchers in the field? SoLaR@COLM 2025 is looking for paper submissions and reviewers!
- ML track: algorithms, math, computation
- Socio-technical track: policy, ethics, human participant research
Congrats again!!!
14.06.2025 00:01