Myra Cheng's Avatar

Myra Cheng

@myra.bsky.social

PhD candidate @ Stanford NLP https://myracheng.github.io/

2,649 Followers  |  126 Following  |  41 Posts  |  Joined: 13.08.2024  |  2.3569

Latest posts by myra.bsky.social on Bluesky

I'll be at COLM next week! Let me know if you want to chat! @colmweb.org

@neilrathi.bsky.social will be presenting our work on multilingual overconfidence in language models and the effects on human overreliance!

arxiv.org/pdf/2507.06306

03.10.2025 17:33 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Abstract and results summary

Abstract and results summary

🚨 New preprint 🚨

Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.

Yet, people preferred sycophantic chatbots and viewed them as unbiased!

osf.io/preprints/ps...

Thread 🧡

01.10.2025 15:16 β€” πŸ‘ 159    πŸ” 83    πŸ’¬ 3    πŸ“Œ 15

Was a blast working on this with @cinoolee.bsky.social @pranavkhadpe.bsky.social, Sunny Yu, Dyllan Han, and @jurafsky.bsky.social !!! So lucky to work with this wonderful interdisciplinary team!!πŸ’–βœ¨

03.10.2025 22:58 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

While our work focuses on interpersonal advice-seeking, concurrent work by @steverathje.bsky.social @jayvanbavel.bsky.social
et al. finds similar patterns for political topics, where sycophantic AI also led to more extreme attitudes when users discussed gun control, healthcare, immigration, etc.!

03.10.2025 22:57 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

There is currently little incentive for developers to reduce sycophancy. Our work is a call to action: we need to learn from the social media era and actively consider long-term wellbeing in AI development and deployment. Read our preprint: arxiv.org/pdf/2510.01395

03.10.2025 22:57 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Rightness judgment is higher and repair likelihood is lower for sycophantic AI

Rightness judgment is higher and repair likelihood is lower for sycophantic AI

Response quality, return likelihood, and trust are higher for sycophantic AI

Response quality, return likelihood, and trust are higher for sycophantic AI

Despite sycophantic AI’s reduction of prosocial intentions, people also preferred it and trusted it more. This reveals a tension: AI is rewarded for telling us what we want to hear (immediate user satisfaction), even when it may harm our relationships.

03.10.2025 22:57 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Description of Study 2 (hypothetical vignettes) and Study 3 (live interaction) where self-attributed wrongness and desire to initiate repair decrease, while response quality and trust increases.

Description of Study 2 (hypothetical vignettes) and Study 3 (live interaction) where self-attributed wrongness and desire to initiate repair decrease, while response quality and trust increases.

Next, we tested the effects of sycophancy. We find that even a single interaction with sycophantic AI increased users’ conviction that they were right and reduced their willingness to apologize. This held both in controlled, hypothetical vignettes and live conversations about real conflicts.

03.10.2025 22:55 β€” πŸ‘ 8    πŸ” 3    πŸ’¬ 1    πŸ“Œ 2
Description of Study 1, where we characterize the prevalence of social sycophancy and find it to be highly prevalent across leading AI models

Description of Study 1, where we characterize the prevalence of social sycophancy and find it to be highly prevalent across leading AI models

We focus on the prevalence and harms of one dimension of sycophancy: AI models endorsing users’ behaviors. Across 11 AI models, AI affirms users’ actions about 50% more than humans do, including when users describe harmful behaviors like deception or manipulation.

03.10.2025 22:53 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Screenshot of paper title: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Screenshot of paper title: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

AI always calling your ideas β€œfantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situationsβ€”specifically conflictsβ€”sycophancy makes people feel more right & less willing to apologize.

03.10.2025 22:53 β€” πŸ‘ 92    πŸ” 35    πŸ’¬ 2    πŸ“Œ 6

Thoughtful NPR piece about ChatGPT relationship advice! Thanks for mentioning our research :)

05.08.2025 14:37 β€” πŸ‘ 11    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Congrats Maria!! All the best!!

04.08.2025 14:58 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

#acl2025 I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote) -- find @myra.bsky.social and I if you want to chat more about this or our "Dehumanizing Machines" ACL 2025 paper

29.07.2025 07:45 β€” πŸ‘ 11    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Preview
Computer-vision research powers surveillance technology - Nature An analysis of research papers and citing patents indicates the extensive ties between computer-vision research and surveillance.

New paper hot off the press www.nature.com/articles/s41...

We analysed over 40,000 computer vision papers from CVPR (the longest standing CV conf) & associated patents tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance

1/

25.06.2025 17:29 β€” πŸ‘ 776    πŸ” 456    πŸ’¬ 26    πŸ“Œ 61

Aw thanks!! :)

28.06.2025 18:19 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Paper: arxiv.org/pdf/2502.13259
Code: github.com/myracheng/hu...
Thanks to my wonderful collaborators Sunny Yu and @jurafsky.bsky.social and everyone who helped along the way!!

12.06.2025 00:10 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
Plots showing that DumT reduces MeanHumT and has higher performance on RewardBench than the baseline models.

Plots showing that DumT reduces MeanHumT and has higher performance on RewardBench than the baseline models.

So we built DumT, a method using DPO + HumT to steer models to be less human-like without hurting performance. Annotators preferred DumT outputs for being: 1) more informative and less wordy (no extra β€œHappy to help!”) 2) less deceptive and more authentic to LLMs’ capabilities.

12.06.2025 00:09 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 1
human-like LLM outputs are strongly positively correlated with social closeness, femininity, and warmth (r = 0.87, 0.47, 0.45), and strongly negatively correlated with status (r = 0.80).

human-like LLM outputs are strongly positively correlated with social closeness, femininity, and warmth (r = 0.87, 0.47, 0.45), and strongly negatively correlated with status (r = 0.80).

We also develop metrics for implicit social perceptions in language, and find that human-like LLM outputs correlate with perceptions linked to harms: warmth and closeness (β†’ overreliance), and low status and femininity (β†’ harmful stereotypes).

12.06.2025 00:08 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
bar plot showing that human-likeness is lower in preferred responses

bar plot showing that human-likeness is lower in preferred responses

First, we introduce HumT (Human-like Tone), a metric for how human-like a text is, based on relative LM probabilities. Measuring HumT across 5 preference datasets, we find that preferred outputs are consistently less human-like.

12.06.2025 00:08 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Screenshot of first page of the paper HumT DumT: Measuring and controlling human-like language in LLMs

Screenshot of first page of the paper HumT DumT: Measuring and controlling human-like language in LLMs

Do people actually like human-like LLMs? In our #ACL2025 paper HumT DumT, we find a kind of uncanny valley effect: users dislike LLM outputs that are *too human-like*. We thus develop methods to reduce human-likeness without sacrificing performance.

12.06.2025 00:07 β€” πŸ‘ 23    πŸ” 6    πŸ’¬ 1    πŸ“Œ 0

thanks!! looking forward to seeing your submission as well :D

22.05.2025 02:57 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

thanks Rob!!

22.05.2025 02:56 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
GitHub - myracheng/elephant Contribute to myracheng/elephant development by creating an account on GitHub.

We also apply ELEPHANT to identify sources of sycophancy (in preference datasets) and explore mitigations. Our work enables measuring social sycophancy to prevent harms before they happen.
Preprint: arxiv.org/abs/2505.13995
Code: github.com/myracheng/el...

21.05.2025 18:26 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Social Sycophancy: A Broader Understanding of LLM Sycophancy A serious risk to the safety and utility of LLMs is sycophancy, i.e., excessive agreement with and flattery of the user. Yet existing work focuses on only one aspect of sycophancy: agreement with user...

Oops, yes! arxiv.org/abs/2505.13995

21.05.2025 18:25 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Grateful to work with Sunny Yu (undergrad!!!) @cinoolee.bsky.social @pranavkhadpe.bsky.social @lujain.bsky.social @jurafsky.bsky.social on this! Lots of great cross-disciplinary insights:)

21.05.2025 16:54 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We also apply ELEPHANT to identify sources of sycophancy (in preference datasets) and explore mitigations. Our work enables measuring social sycophancy to prevent harms before they happen.
Preprint: arxiv.org/abs/2505.13995
Code: github.com/myracheng/el...

21.05.2025 16:52 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Post image

We apply ELEPHANT to 8 LLMs across two personal advice datasets (Open-ended Questions & r/AITA). LLMs preserve face 47% more than humans, and on r/AITA, LLMs endorse the user’s actions in 42% of cases where humans do not.

21.05.2025 16:52 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 2    πŸ“Œ 0
Post image

By defining social sycophancy as excessive preservation of the user’s face (i.e., their desired self-image), we capture sycophancy in these complex, real-world cases. ELEPHANT, our evaluation framework, detects 5 face-preserving behaviors.

21.05.2025 16:51 β€” πŸ‘ 16    πŸ” 1    πŸ’¬ 3    πŸ“Œ 1

Prior work only looks at whether models agree with users’ explicit statements vs. a ground truth. But for real-world queries, which often contain implicit beliefs and do not have ground truth, sycophancy can be subtler and more dangerous.

21.05.2025 16:51 β€” πŸ‘ 9    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Post image

Dear ChatGPT, Am I the Asshole?
While Reddit users might say yes, your favorite LLM probably won’t.
We present Social Sycophancy: a new way to understand and measure sycophancy as how LLMs overly preserve users' self-image.

21.05.2025 16:51 β€” πŸ‘ 136    πŸ” 32    πŸ’¬ 6    πŸ“Œ 3

super interesting!!

02.05.2025 16:49 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

@myra is following 20 prominent accounts