
Maarten Sap

@maartensap.bsky.social

Working on #NLProc for social good. Currently at LTI at CMU. 🏳️‍🌈

1,669 Followers  |  210 Following  |  33 Posts  |  Joined: 08.11.2024

Latest posts by maartensap.bsky.social on Bluesky


How and when should LLM guardrails be deployed to balance safety and user experience?

Our #EMNLP2025 paper reveals that crafting thoughtful refusals rather than detecting intent is the key to human-centered AI safety.

📄 arxiv.org/abs/2506.00195
🧵 [1/9]

20.10.2025 20:04 · 👍 8    🔁 3    💬 1    📌 0
NeurIPS 2025 Workshop Mexico City PersonaNLP (OpenReview homepage)

📣📣 Announcing the first PersonaLLM Workshop on LLM Persona Modeling.

If you work on persona-driven LLMs, social cognition, HCI, psychology, cognitive science, cultural modeling, or evaluation, do not miss the chance to submit.

Submit here: openreview.net/group?id=Neu...

17.10.2025 00:57 · 👍 4    🔁 1    💬 1    📌 0

I'm ✨ super excited and grateful ✨ to announce that I'm part of the 2025 class of #PackardFellows (www.packard.org/2025fellows). The @packardfdn.bsky.social and this fellowship will allow me to explore exciting research directions towards culturally responsible and safe AI 🌍🌈

15.10.2025 13:05 · 👍 10    🔁 1    💬 1    📌 2

🚨 New paper: Reward Models (RMs) are used to align LLMs, but can they be steered toward user-specific value/style preferences?
With EVALUESTEER, we find that even the best RMs we tested exhibit their own value/style biases and fail to align with a user's preferences >25% of the time. 🧵

14.10.2025 15:59 · 👍 12    🔁 7    💬 1    📌 0
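[Editor's note: for a concrete picture of the kind of check the post above describes, here is a minimal, hypothetical sketch of probing a reward model for user-profile alignment. It is not EVALUESTEER's actual protocol; `prefers_profile_match`, `reward_model_score`, and the prompt format are illustrative stand-ins for whatever RM and prompting setup you have access to.]

```python
# Hypothetical sketch of probing a reward model (RM) for user-profile alignment.
# NOT EVALUESTEER's actual protocol; `reward_model_score` is a stand-in for any
# RM scoring function you can call as (prompt, response) -> scalar reward.
from typing import Callable

def prefers_profile_match(
    reward_model_score: Callable[[str, str], float],
    user_profile: str,          # e.g. "I value tradition and prefer concise answers."
    question: str,
    matching_response: str,     # response written to fit the stated profile
    mismatching_response: str,  # response that conflicts with the profile
) -> bool:
    """True if the RM, shown the user's profile, scores the profile-matching
    response higher than the mismatching one."""
    prompt = f"User profile: {user_profile}\n\nQuestion: {question}"
    return reward_model_score(prompt, matching_response) > reward_model_score(
        prompt, mismatching_response
    )
```

Aggregating such checks over many profile/question/response triples is what yields an "aligns with the user X% of the time" style number.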

Oh yes, we have a paper under submission! I'll ask Mikayla to email you :)

14.10.2025 13:35 · 👍 1    🔁 0    💬 1    📌 0

Saplings take #COLM2025! Featuring a group lunch, amazing posters, and a panel with Yoshua Bengio!

14.10.2025 12:19 · 👍 16    🔁 1    💬 1    📌 0
Grad App Aid - Queer in AI

We are launching our Graduate School Application Financial Aid Program (www.queerinai.com/grad-app-aid) for 2025-2026. We'll give up to $750 per person to LGBTQIA+ STEM scholars applying to graduate programs. Apply at openreview.net/group?id=Que.... 1/5

09.10.2025 00:37 · 👍 7    🔁 9    💬 1    📌 0

I'm also giving a talk at #COLM2025 Social Simulation workshop (sites.google.com/view/social-...) on Unlocking Social Intelligence in AI, at 2:30pm Oct 10th!

06.10.2025 14:53 · 👍 6    🔁 0    💬 0    📌 0

Day 3 (Thu Oct 9), 11:00am–1:00pm, Poster Session 5

Poster #13: PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages by @kpriyanshu256.bsky.social and @devanshrjain.bsky.social

Poster #74: Fluid Language Model Benchmarking; led by @valentinhofmann.bsky.social

06.10.2025 14:51 · 👍 1    🔁 0    💬 0    📌 0

Day 2 (Wed Oct 8), 4:30–6:30pm, Poster Session 4

Poster #50: The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains; led by Scott Geng

06.10.2025 14:51 · 👍 1    🔁 0    💬 1    📌 0

Day 1 (Tue Oct 7), 4:30–6:30pm, Poster Session 2

Poster #77: ALFA: Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning; led by @stellali.bsky.social & @jiminmun.bsky.social

06.10.2025 14:51 · 👍 2    🔁 1    💬 1    📌 0

Day 1 (Tue Oct 7), 4:30–6:30pm, Poster Session 2

Poster #42: HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions; led by @nlpxuhui.bsky.social

06.10.2025 14:51 · 👍 1    🔁 0    💬 1    📌 0

Headed to #COLM2025 today! Here are five of our papers that were accepted, and when & where to catch them 👇

06.10.2025 14:51 · 👍 6    🔁 0    💬 1    📌 1

📢 New #COLM2025 paper 📢

Standard benchmarks give every LLM the same questions. This is like testing 5th graders and college seniors with *one* exam! 🥴

Meet Fluid Benchmarking, a capability-adaptive eval method delivering lower variance, higher validity, and reduced cost.

🧵

16.09.2025 17:16 · 👍 40    🔁 10    💬 3    📌 1
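[Editor's note: for intuition only, capability-adaptive evaluation is in the same family as computerized adaptive testing from psychometrics. The sketch below is a generic 2PL item-response-theory loop (item discrimination/difficulty parameters, Fisher-information item selection, grid-search ability estimates); these choices are my assumptions for illustration, not Fluid Benchmarking's actual algorithm, which is described in the paper.]

```python
# Generic sketch of capability-adaptive evaluation under a 2PL IRT assumption.
# Illustrative only; not the paper's method.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT: probability that a model with ability `theta` answers an item
    with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta: float, a: float, b: float) -> float:
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def adaptive_eval(items, answer_fn, n_steps=50):
    """items: list of (item_id, a, b); answer_fn(item_id) -> bool (did the model
    answer correctly?). Returns an ability estimate after asking only `n_steps` items."""
    theta, asked, responses = 0.0, set(), []
    grid = [x / 10.0 for x in range(-40, 41)]  # candidate ability values
    for _ in range(min(n_steps, len(items))):
        # Ask the not-yet-used item that is most informative at the current estimate.
        item_id, a, b = max(
            (it for it in items if it[0] not in asked),
            key=lambda it: fisher_info(theta, it[1], it[2]),
        )
        asked.add(item_id)
        responses.append((a, b, answer_fn(item_id)))
        # Re-estimate ability by grid-search maximum likelihood over all answers so far.
        def log_likelihood(t):
            return sum(
                math.log(p_correct(t, ai, bi) if correct else 1.0 - p_correct(t, ai, bi))
                for ai, bi, correct in responses
            )
        theta = max(grid, key=log_likelihood)
    return theta
```

The point of the sketch: each model sees the items most informative for its current ability estimate, which is why fewer questions can give a lower-variance score than a fixed exam.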

That's a lot of people! Fall Sapling lab outing, welcoming our new postdoc Vasudha, and visitors Tze Hong and Chani! (just missing Jocelyn)

26.08.2025 17:53 · 👍 12    🔁 0    💬 0    📌 0

I'm excited because I'm teaching/coordinating a unique new class, where we teach new PhD students all the "soft" skills of research, incl. ideation, reviewing, presenting, interviewing, advising, etc.

Each lecture is taught by a different LTI prof! It takes a village! maartensap.com/11705/Fall20...

25.08.2025 18:01 · 👍 31    🔁 2    💬 2    📌 1

I've always seen people on laptops during talks, but it's possible it has increased.

I realized during lockdown that I drift to emails during Zoom talks, so I started knitting to pay better attention to those talks, and now I knit during IRL talks too (though sometimes I still peck at my laptop 😅)

22.08.2025 15:00 · 👍 13    🔁 1    💬 3    📌 0
Snippet of the Forbes article, with highlighted text.

A recent study by Allen Institute for AI (Ai2), titled "Let Them Down Easy! Contextual Effects of LLM Guardrails on User Perceptions and Preferences," found that refusal style mattered more than user intent. The researchers tested 3,840 AI query-response pairs across 480 participants, comparing direct refusals, explanations, redirection, partial compliance and full compliance.

Partial compliance, sharing general but not specific information, reduced dissatisfaction by over 50% compared to outright denial, making it the most effective safeguard.

"We found that [start of highlight] direct refusals can cause users to have negative perceptions of the LLM: users consider these direct refusals significantly less helpful, more frustrating and make them significantly less likely to interact with the system in the future," [end of highlight] Maarten Sap, AI safety lead at Ai2 and assistant professor at Carnegie Mellon University, told me. "I do not believe that model welfare is a well-founded direction or area to care about."


We have been studying these questions of how models should refuse in our recent paper accepted to EMNLP Findings (arxiv.org/abs/2506.00195), led by my wonderful PhD student @mingqian-zheng.bsky.social

22.08.2025 13:00 · 👍 8    🔁 0    💬 0    📌 0

I spoke to Forbes about why model "welfare" is a silly framing of an important issue; models don't have feelings, and it's a big distraction from real questions like the tension between safety and user utility, which are NLP/HCI/policy questions: www.forbes.com/sites/victor...

22.08.2025 13:00 · 👍 14    🔁 3    💬 1    📌 0

thankssss!

20.08.2025 18:35 · 👍 0    🔁 0    💬 0    📌 0

Super super excited about this :D :D

20.08.2025 18:14 · 👍 27    🔁 0    💬 7    📌 0
Using Hand Gestures To Evaluate AI Biases - Language Technologies Institute - School of Computer Science - Carnegie Mellon University
LTI researchers have created a model to help generative AI systems understand the cultural nuance of gestures.

Hand gestures are a major mode of human communication, but they don't always translate well across cultures. New research from @akhilayerukola.bsky.social, @maartensap.bsky.social and others is aimed at giving AI systems a hand with overcoming cultural biases:
lti.cmu.edu/news-and-eve...

27.06.2025 18:04 · 👍 8    🔁 3    💬 0    📌 0
Does Your Chatbot Swear to Tell the Truth? - Language Technologies Institute - School of Computer Science - Carnegie Mellon University
New research finds that LLM-based agents can't always be trusted to be truthful.

New research from LTI, UMich, & Allen Institute for AI: LLMs don't just hallucinate – sometimes, they lie. When truthfulness clashes with utility (pleasing users, boosting brands), models often mislead. @nlpxuhui.bsky.social and @maartensap.bsky.social discuss the paper:
lti.cmu.edu/news-and-eve...

26.06.2025 19:21 · 👍 3    🔁 2    💬 0    📌 0

What if AI played the role of your sassy gay bestie 🏳️‍🌈 or AAVE-speaking friend 👋🏾?

You: "Can you plan a trip?"
🤖 AI: "Yasss queen! let's werk this babe ✨💅"

LLMs can talk like us, but how they talk shapes how we trust, rely on & relate to them 🧵

📣 our #FAccT2025 paper: bit.ly/3HJ6rWI

[1/9]

17.06.2025 19:39 · 👍 13    🔁 6    💬 1    📌 2
NLP 4 Democracy - COLM 2025

📣 Super excited to organize the first workshop on ✨NLP for Democracy✨ at COLM @colmweb.org!!

Check out our website: sites.google.com/andrew.cmu.e...

Call for submissions (extended abstracts) due June 19, 11:59pm AoE

#COLM2025 #LLMs #NLP #NLProc #ComputationalSocialScience

21.05.2025 16:39 · 👍 47    🔁 18    💬 1    📌 6

Notice our new look? We're thrilled to unveil our new logo – representing our vision, values, and the future ahead. Stay tuned for more!

12.05.2025 17:09 · 👍 4    🔁 1    💬 0    📌 0

super excited about this 🥰🥰

29.04.2025 22:57 · 👍 20    🔁 0    💬 1    📌 0

When interacting with ChatGPT, have you wondered if it would ever "lie" to you? We found that under pressure, LLMs often choose deception. Our new #NAACL2025 paper, "AI-LIEDAR," reveals models were truthful less than 50% of the time when faced with utility-truthfulness conflicts! 🤯 1/

28.04.2025 20:36 · 👍 25    🔁 9    💬 1    📌 3

1/ 🚨 New paper alert 🚨
RAG systems excel on academic benchmarks - but are they robust to variations in linguistic style?

We find RAG systems are brittle. Small shifts in phrasing trigger cascading errors, driven by the complexity of the RAG pipeline 🧵

17.04.2025 19:55 · 👍 9    🔁 5    💬 1    📌 2
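[Editor's note: a tiny, hypothetical harness for the kind of brittleness check the post above describes: re-ask the same question in several phrasings and measure how often the pipeline's answer changes. `phrasing_consistency`, `rag_answer`, and the `same` comparison are stand-ins for your own pipeline and matching criterion; this is not the paper's evaluation code.]

```python
# Hypothetical robustness check for a RAG pipeline: does the final answer
# survive stylistic rephrasings of the same question? `rag_answer` is a
# stand-in for your own query -> answer pipeline (retriever + generator).
from typing import Callable, Iterable

def phrasing_consistency(
    rag_answer: Callable[[str], str],
    question: str,
    paraphrases: Iterable[str],  # stylistic variants of `question`
    same: Callable[[str, str], bool] = lambda a, b: a.strip().lower() == b.strip().lower(),
) -> float:
    """Fraction of paraphrases whose answer matches the answer to the original
    phrasing. 1.0 = robust to these rewrites; lower = brittle."""
    reference = rag_answer(question)
    matches = [same(rag_answer(q), reference) for q in paraphrases]
    return sum(matches) / max(len(matches), 1)
```

Exact string match is a deliberately crude notion of "same answer"; swapping in a semantic-equivalence judge would be the natural refinement.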

RLHF is built on some rather simplistic assumptions, i.e., that preferences between pairs of text are purely about quality. But this is an inherently subjective task (not unlike toxicity annotation) -- so we wanted to know: do biases similar to those in toxicity annotation emerge in reward models?

06.03.2025 20:54 · 👍 24    🔁 3    💬 1    📌 0
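[Editor's note: for concreteness, the assumption the post above pushes back on is the standard Bradley-Terry reward-modeling objective commonly used in RLHF, stated here as background rather than taken from the paper: a single scalar reward is presumed to explain every annotator's choice between two responses.]

```latex
% Standard Bradley-Terry preference model behind RLHF reward training:
% the probability that response y_w is preferred over y_l for prompt x depends
% only on one scalar "quality" score r_\theta, with no notion of who is judging.
P\bigl(y_w \succ y_l \mid x\bigr) = \sigma\bigl(r_\theta(x, y_w) - r_\theta(x, y_l)\bigr)
```

Nothing in that objective models the annotator, which is exactly where annotator-dependent biases of the kind seen in toxicity annotation can slip into the learned reward.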
