
Asher Zheng

@asher-zheng.bsky.social

PhD @ UT Linguistics | Semantics/Pragmatics/NLP | https://asherz720.github.io/ | Prev. @UoEdinburgh @Hanyang

162 Followers  |  169 Following  |  10 Posts  |  Joined: 22.11.2024

Latest posts by asher-zheng.bsky.social on Bluesky

CoBRA: Quantifying Strategic Language Use and LLM Pragmatics Language is often used strategically, particularly in high-stakes, adversarial settings, yet most work on pragmatics and LLMs centers on cooperativity. This leaves a gap in systematic understanding of...

Current LLMs are unable to break out of cooperative principles and still show a limited understanding of strategic language. We believe this work lays the foundations for sophisticated strategic reasoning and safety monitoring in downstream tasks.
📄: arxiv.org/abs/2506.01195

03.06.2025 11:56 · 👍 0  🔁 0  💬 0  📌 0
[Two images attached]

By analyzing model reasoning, we find that extra reasoning introduces overcomplication (img left), misunderstanding, and internal inconsistency (img right). This shows that current LLMs still lack sophisticated pragmatic understanding in many ways.

03.06.2025 11:56 · 👍 0  🔁 0  💬 1  📌 0
[Image attached]

We evaluate a range of LLMs on how well they perceive strategic language. Models struggle with our metrics while showing an overall good understanding of Gricean principles. Model size tends to have a positive effect, while reasoning does not help.

03.06.2025 11:56 · 👍 0  🔁 0  💬 1  📌 0
[Image attached]

(2) BaT and PaT are valid measures that reflect strategic gains/losses and can, to some extent, predict conversational outcomes. In addition, our metrics are more objective. When conditioned on cases where the outcome rests on logical arguments, their predictive power rises.

03.06.2025 11:56 · 👍 0  🔁 0  💬 1  📌 0
[Two images attached]

We also introduce CHARM, an annotated dataset of real legal cross-examination dialogues. Applying our framework, we show that (1) cooperative and non-cooperative discourse are distinct over the identified properties (img left), and BaT and PaT show the same distributional distinction (img right).

03.06.2025 11:56 · 👍 0  🔁 0  💬 1  📌 0
[Two images attached]

Based on the components above, we introduce three metrics: Benefit at Turn (BaT), Penalty at Turn (PaT), and Normalized Relative Benefit at Turn (NRBaT). They measure the strategic gains, losses, and cumulative benefit at a turn.

03.06.2025 11:56 · 👍 2  🔁 0  💬 1  📌 0
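To make the metric names concrete, here is a minimal sketch of how per-turn gains, losses, and a normalized cumulative benefit could be computed. The function names, the signed-contribution input format, and the formulas are all illustrative assumptions, not the paper's exact definitions.

```python
def turn_metrics(contributions):
    """Per-turn strategic metrics over signed component scores.

    `contributions` lists signed values for one turn
    (positive = strategic gain, negative = loss).
    The formulas are illustrative, not CoBRA's exact ones.
    """
    bat = sum(c for c in contributions if c > 0)   # Benefit at Turn
    pat = -sum(c for c in contributions if c < 0)  # Penalty at Turn
    return bat, pat

def nrbat(turns):
    """Normalized Relative Benefit at Turn (sketch): cumulative net
    benefit scaled by total activity, so the result lies in [-1, 1]."""
    per_turn = [turn_metrics(t) for t in turns]
    total_bat = sum(b for b, _ in per_turn)
    total_pat = sum(p for _, p in per_turn)
    net, total = total_bat - total_pat, total_bat + total_pat
    return net / total if total else 0.0

# A short exchange: a hedged win, a clean gain, a small loss.
turns = [[1.0, -0.4], [0.5], [-0.2]]
print(nrbat(turns))  # ≈ 0.43: net gains dominate so far
```

The normalization keeps the score comparable across dialogues of different lengths, which matches the thread's use of NRBaT as a cumulative measure.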

For example, a witness can make a commitment that leads to a win for her but violate the maxim of manner to make herself less accountable for that commitment. The commitment itself is beneficial, but the gains are reduced by the vagueness.

03.06.2025 11:56 · 👍 0  🔁 0  💬 1  📌 0

We derive non-cooperativity from both Gricean and game-theoretic pragmatics. In our framework, a strategic move is evaluated on two components: the commitment it expresses (base value) and the maxim violations it incurs to maintain consistency (penalties/compensations).

03.06.2025 11:56 · 👍 0  🔁 0  💬 1  📌 0
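The two-component evaluation described above can be sketched as additive scoring. This is a toy illustration: the `Move` class, the numeric values, and the additive combination are hypothetical assumptions, not the framework's actual formulation.

```python
from dataclasses import dataclass, field

@dataclass
class Move:
    """One strategic move: a commitment plus any maxim violations."""
    base_value: float  # value of the commitment itself
    # Gricean maxim -> signed penalty/compensation for violating it
    violations: dict = field(default_factory=dict)

    def score(self) -> float:
        # Net value = base value of the commitment, adjusted by the
        # penalties (or compensations) of each maxim violation.
        return self.base_value + sum(self.violations.values())

# A beneficial commitment hedged by vague wording (manner violation):
move = Move(base_value=1.0, violations={"manner": -0.4})
print(move.score())  # 0.6: gains reduced by vagueness
```

This mirrors the witness example in the thread: the commitment wins the point (positive base value), while the manner violation discounts the gain.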
[Two images attached]

Language is often strategic, but LLMs tend to play nice. How strategic are they really? Probing into that is key for future safety alignment.

👉 Introducing CoBRA 🐍, a framework that assesses strategic language.

Work with my amazing advisors @jessyjli.bsky.social and @David I. Beaver!

03.06.2025 11:56 · 👍 11  🔁 3  💬 1  📌 1
[Two images attached]

Have that eerie feeling of déjà vu when reading model-generated text 👀, but can't pinpoint the specific words or phrases 👀?

✨ We introduce QUDsim to quantify discourse similarities beyond lexical, syntactic, and content overlap.

21.04.2025 21:29 · 👍 21  🔁 9  💬 2  📌 3

🙋‍♂️

25.11.2024 14:39 · 👍 0  🔁 0  💬 0  📌 0
