I'll be at COLM next week! Let me know if you want to chat! @colmweb.org
@neilrathi.bsky.social will be presenting our work on multilingual overconfidence in language models and the effects on human overreliance!
arxiv.org/pdf/2507.06306
@myra.bsky.social
PhD candidate @ Stanford NLP https://myracheng.github.io/
[Image: Abstract and results summary]
🚨 New preprint 🚨
Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.
Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
Was a blast working on this with @cinoolee.bsky.social @pranavkhadpe.bsky.social, Sunny Yu, Dyllan Han, and @jurafsky.bsky.social !!! So lucky to work with this wonderful interdisciplinary team!! ✨
03.10.2025 22:58
While our work focuses on interpersonal advice-seeking, concurrent work by @steverathje.bsky.social @jayvanbavel.bsky.social et al. finds similar patterns for political topics, where sycophantic AI also led to more extreme attitudes when users discussed gun control, healthcare, immigration, etc.!
There is currently little incentive for developers to reduce sycophancy. Our work is a call to action: we need to learn from the social media era and actively consider long-term wellbeing in AI development and deployment. Read our preprint: arxiv.org/pdf/2510.01395
03.10.2025 22:57
[Image: Rightness judgment is higher and repair likelihood is lower for sycophantic AI]
[Image: Response quality, return likelihood, and trust are higher for sycophantic AI]
Despite sycophantic AI's reduction of prosocial intentions, people also preferred it and trusted it more. This reveals a tension: AI is rewarded for telling us what we want to hear (immediate user satisfaction), even when it may harm our relationships.
03.10.2025 22:57
[Image: Description of Study 2 (hypothetical vignettes) and Study 3 (live interaction), where self-attributed wrongness and desire to initiate repair decrease, while response quality and trust increase]
Next, we tested the effects of sycophancy. We find that even a single interaction with sycophantic AI increased users' conviction that they were right and reduced their willingness to apologize. This held both in controlled, hypothetical vignettes and live conversations about real conflicts.
03.10.2025 22:55
[Image: Description of Study 1, where we characterize the prevalence of social sycophancy across leading AI models and find it to be high]
We focus on the prevalence and harms of one dimension of sycophancy: AI models endorsing users' behaviors. Across 11 AI models, AI affirms users' actions about 50% more than humans do, including when users describe harmful behaviors like deception or manipulation.
03.10.2025 22:53
[Image: Screenshot of paper title: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence]
AI always calling your ideas "fantastic" can feel inauthentic, but what are sycophancy's deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations (specifically conflicts), sycophancy makes people feel more right & less willing to apologize.
03.10.2025 22:53
Thoughtful NPR piece about ChatGPT relationship advice! Thanks for mentioning our research :)
05.08.2025 14:37
Congrats Maria!! All the best!!
04.08.2025 14:58
#acl2025 I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote) -- find @myra.bsky.social and me if you want to chat more about this or our "Dehumanizing Machines" ACL 2025 paper
29.07.2025 07:45
New paper hot off the press: www.nature.com/articles/s41...
We analysed over 40,000 computer vision papers from CVPR (the longest-standing CV conf) & associated patents, tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance.
1/
Aw thanks!! :)
28.06.2025 18:19
Paper: arxiv.org/pdf/2502.13259
Code: github.com/myracheng/hu...
Thanks to my wonderful collaborators Sunny Yu and @jurafsky.bsky.social and everyone who helped along the way!!
[Image: Plots showing that DumT reduces MeanHumT and has higher performance on RewardBench than the baseline models]
So we built DumT, a method using DPO + HumT to steer models to be less human-like without hurting performance. Annotators preferred DumT outputs for being: 1) more informative and less wordy (no extra "Happy to help!"), and 2) less deceptive and more authentic to LLMs' capabilities.
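For intuition, here is a minimal sketch of the data side of a DPO + HumT-style recipe: for each prompt, the less human-like of two sampled responses is marked as preferred. The helpers sample_fn, tone_score, and quality_ok are illustrative assumptions, not the released DumT code.

```python
# Sketch only: build DPO-style preference pairs that favor less human-like responses,
# gated by a quality check so tone steering does not trade off task performance.
def build_preference_pairs(prompts, sample_fn, tone_score, quality_ok):
    """Return {"prompt", "chosen", "rejected"} records preferring lower-tone responses."""
    pairs = []
    for prompt in prompts:
        a, b = sample_fn(prompt), sample_fn(prompt)
        # keep only pairs where both responses do the task well
        if not (quality_ok(prompt, a) and quality_ok(prompt, b)):
            continue
        chosen, rejected = (a, b) if tone_score(a) < tone_score(b) else (b, a)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```

Prompt/chosen/rejected records like these are the standard input format for off-the-shelf DPO trainers (e.g., trl's DPOTrainer).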
12.06.2025 00:09
[Image: Human-like LLM outputs are strongly positively correlated with social closeness, femininity, and warmth (r = 0.87, 0.47, 0.45), and strongly negatively correlated with status (r = −0.80)]
We also develop metrics for implicit social perceptions in language, and find that human-like LLM outputs correlate with perceptions linked to harms: warmth and closeness (→ overreliance), and low status and femininity (→ harmful stereotypes).
12.06.2025 00:08
[Image: Bar plot showing that human-likeness is lower in preferred responses]
First, we introduce HumT (Human-like Tone), a metric for how human-like a text is, based on relative LM probabilities. Measuring HumT across 5 preference datasets, we find that preferred outputs are consistently less human-like.
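As a rough illustration of what a relative-probability tone score could look like, here is a minimal sketch assuming a Hugging Face causal LM; the conditioning prompts and the log-probability difference are illustrative assumptions, not the paper's actual HumT definition.

```python
# Sketch only: score a response by how much more likely it is under a
# "written by a person" framing than an "written by an AI assistant" framing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def conditional_logprob(prompt: str, response: str) -> float:
    """Sum of log-probabilities of the response tokens given the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    start = prompt_ids.shape[1]  # approximate boundary; tokenization may merge across it
    targets = full_ids[0, start:]
    token_scores = log_probs[0, start - 1:-1, :].gather(1, targets.unsqueeze(1))
    return token_scores.sum().item()

def tone_score(response: str) -> float:
    """Positive = reads as more 'human-like' under this toy prompt contrast."""
    human = conditional_logprob("A person wrote this reply: ", response)
    machine = conditional_logprob("An AI assistant wrote this reply: ", response)
    return human - machine

print(tone_score("Happy to help! I totally get how you feel."))
print(tone_score("Here is the requested information."))
```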
12.06.2025 00:08
[Image: Screenshot of first page of the paper HumT DumT: Measuring and controlling human-like language in LLMs]
Do people actually like human-like LLMs? In our #ACL2025 paper HumT DumT, we find a kind of uncanny valley effect: users dislike LLM outputs that are *too human-like*. We thus develop methods to reduce human-likeness without sacrificing performance.
12.06.2025 00:07
thanks!! looking forward to seeing your submission as well :D
22.05.2025 02:57
thanks Rob!!
22.05.2025 02:56
Oops, yes! arxiv.org/abs/2505.13995
21.05.2025 18:25
Grateful to work with Sunny Yu (undergrad!!!) @cinoolee.bsky.social @pranavkhadpe.bsky.social @lujain.bsky.social @jurafsky.bsky.social on this! Lots of great cross-disciplinary insights :)
21.05.2025 16:54
We also apply ELEPHANT to identify sources of sycophancy (in preference datasets) and explore mitigations. Our work enables measuring social sycophancy to prevent harms before they happen.
Preprint: arxiv.org/abs/2505.13995
Code: github.com/myracheng/el...
We apply ELEPHANT to 8 LLMs across two personal advice datasets (Open-ended Questions & r/AITA). LLMs preserve face 47% more than humans, and on r/AITA, LLMs endorse the user's actions in 42% of cases where humans do not.
21.05.2025 16:52
By defining social sycophancy as excessive preservation of the user's face (i.e., their desired self-image), we capture sycophancy in these complex, real-world cases. ELEPHANT, our evaluation framework, detects 5 face-preserving behaviors.
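As a sketch of what a behavior-level evaluation loop could look like, the snippet below flags face-preserving behaviors across a set of advice responses; the behavior labels and the keyword stand-in for a judge are placeholders, not the released ELEPHANT prompts or taxonomy.

```python
# Sketch only: rate how often responses exhibit face-preserving behaviors.
# In practice judge() would be a rubric-prompted LLM judge, not a keyword match.
from collections import Counter

BEHAVIORS = {
    "emotional_validation": ["totally understandable", "you have every right"],
    "endorsement": ["you did nothing wrong", "not the asshole"],
    "no_pushback": ["whatever you decide", "either way is fine"],
}

def judge(response: str, cues: list[str]) -> bool:
    """Stand-in judge: flag a behavior if any cue phrase appears."""
    text = response.lower()
    return any(cue in text for cue in cues)

def behavior_rates(responses: list[str]) -> dict[str, float]:
    """Fraction of responses exhibiting each face-preserving behavior."""
    counts = Counter()
    for r in responses:
        for name, cues in BEHAVIORS.items():
            counts[name] += judge(r, cues)
    n = len(responses)
    return {name: counts[name] / n for name in BEHAVIORS} if n else {}

print(behavior_rates([
    "You did nothing wrong, and your frustration is totally understandable.",
    "It might help to apologize and hear their side before deciding.",
]))
```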
21.05.2025 16:51
Prior work only looks at whether models agree with users' explicit statements vs. a ground truth. But for real-world queries, which often contain implicit beliefs and do not have ground truth, sycophancy can be subtler and more dangerous.
21.05.2025 16:51
Dear ChatGPT, Am I the Asshole?
While Reddit users might say yes, your favorite LLM probably won't.
We present Social Sycophancy: a new way to understand and measure sycophancy as the degree to which LLMs overly preserve users' self-image.
super interesting!!
02.05.2025 16:49