
@aakriti1kumar.bsky.social

22 Followers  |  39 Following  |  11 Posts  |  Joined: 27.11.2024

Latest posts by aakriti1kumar.bsky.social on Bluesky

There’s a lot more detail in the full paper, and I would love to hear your thoughts and feedback on it!

Check out the preprint here: arxiv.org/pdf/2506.10150

17.06.2025 15:13 — 👍 1    🔁 0    💬 0    📌 0

Huge thanks to my amazing collaborators: Fai Poungpeth, @diyiyang.bsky.social, Erina Farrell, @brucelambert.bsky.social, and @mattgroh.bsky.social 🙌

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0

LLMs, when benchmarked against reliable expert judgments, can be reliable tools for overseeing emotionally sensitive AI applications.

Our results show we can use LLMs-as-judge to monitor LLMs-as-companion!
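For illustration, a minimal sketch of what such a monitoring loop could look like in Python. The judge model, rubric wording, 1-5 scale, and the escalate_to_human() helper are assumptions for the sketch, not the setup from the paper.

```python
# Minimal sketch of an LLM-as-judge monitor for a companion chatbot.
# The judge model, rubric wording, 1-5 scale, and escalate_to_human()
# are assumptions for illustration, not the setup from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Rate how well the final reply validates the speaker's emotions, "
    "on a 1-5 scale (1 = dismissing, 5 = fully validating). "
    "Answer with a single integer."
)

def judge_reply(conversation: str, reply: str) -> int:
    """Ask a judge model to score one companion reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"{conversation}\n\nReply to rate: {reply}"},
        ],
    )
    # Assumes the model answers with a bare integer, as instructed.
    return int(response.choices[0].message.content.strip())

# Usage: flag low-empathy replies for human review.
# if judge_reply(convo_text, companion_reply) <= 2:
#     escalate_to_human(convo_text, companion_reply)  # hypothetical helper
```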

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0

For example, in one conversation in our dataset, a response that an expert saw as “dismissing” the speaker’s emotions was interpreted by a crowdworker as “validating” them instead!

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0

These misjudgments from crowdworkers have huge implications for AI training and deployment❌

If we use flawed evaluations to train and monitor "empathic" AI, we risk creating systems that propagate a broken standard of what good communication looks like.

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0
Post image

So why the gap between experts/LLMs and crowds?

Crowdworkers often
- have limited attention
- rely on heuristics like “it’s the thought that counts”, focusing on intentions rather than actual wording
- show systematic rating inflation due to social desirability bias

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0
Post image

And when experts disagree, LLMs struggle to find a consistent signal too.

Here’s how expert agreement (Krippendorff's alpha) varied across empathy sub-components:
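For readers who want to compute an agreement number like this themselves, a minimal sketch using the `krippendorff` PyPI package; the ratings matrix is made up, not data from the paper:

```python
# Minimal sketch: Krippendorff's alpha for one empathy sub-component,
# using the `krippendorff` PyPI package (pip install krippendorff).
# The ratings below are made-up placeholders, not data from the paper.
import numpy as np
import krippendorff

# Rows = raters (e.g., 3 experts), columns = conversations;
# np.nan marks a conversation that a rater did not annotate.
ratings = np.array([
    [4, 3, 5, 2, np.nan],
    [4, 2, 5, 2, 3],
    [5, 3, 4, np.nan, 3],
], dtype=float)

alpha = krippendorff.alpha(
    reliability_data=ratings,
    level_of_measurement="ordinal",  # rating scales are ordered
)
print(f"Krippendorff's alpha = {alpha:.2f}")
```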

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0

But here’s the catch: LLMs are reliable when experts are reliable.

The reliability of expert judgments depends on the clarity of the construct. For nuanced, subjective components of empathic communication, experts often disagree.
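A toy sketch of that relationship, correlating per-component expert agreement with LLM-expert agreement; the component names and numbers are placeholders, not results from the paper:

```python
# Toy sketch of the relationship claimed above: across sub-components,
# the clearer the construct (higher expert alpha), the better LLM
# judgments track experts. Component names and numbers are placeholders,
# not results from the paper.
from scipy.stats import spearmanr

# component -> (expert Krippendorff alpha, LLM-vs-expert agreement)
components = {
    "validates_emotions": (0.75, 0.70),
    "gives_advice": (0.60, 0.55),
    "expresses_care": (0.40, 0.35),
    "matches_tone": (0.20, 0.18),
}
expert_alpha, llm_agreement = zip(*components.values())
rho, _ = spearmanr(expert_alpha, llm_agreement)
print(f"rho(expert alpha, LLM-expert agreement) = {rho:.2f}")
```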

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0
Post image

We analyzed thousands of annotations from LLMs, crowdworkers, and experts on 200 real-world conversations

And specifically looked at 21 sub-components of empathic communication from 4 evaluative frameworks

The result? LLMs consistently matched expert judgments better than crowdworkers did! 🔥
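As a sketch of one way to quantify “matched expert judgments better”, here’s a comparison of each rater pool against an expert consensus; the metric choice (Spearman’s rho) and the simulated scores are assumptions, not the paper’s analysis:

```python
# Sketch of one way to quantify "matched expert judgments better":
# correlate each rater pool's scores with an expert consensus. The
# metric (Spearman's rho) and the simulated scores are assumptions,
# not the paper's analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
expert_consensus = rng.integers(1, 6, size=200).astype(float)  # 200 convos

# Placeholders: crowd scores drift higher (rating inflation) and noisier.
llm_scores = np.clip(expert_consensus + rng.normal(0.0, 0.7, 200), 1, 5)
crowd_scores = np.clip(expert_consensus + rng.normal(0.5, 1.5, 200), 1, 5)

for name, scores in [("LLM", llm_scores), ("crowd", crowd_scores)]:
    rho, _ = spearmanr(expert_consensus, scores)
    print(f"{name} vs experts: Spearman rho = {rho:.2f}")
```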

17.06.2025 15:13 — 👍 1    🔁 0    💬 1    📌 0
Post image

How do we reliably judge if AI companions are performing well on subjective, context-dependent, and deeply human tasks? 🤖

Excited to share the first paper from my postdoc (!!) investigating when LLMs are reliable judges - with empathic communication as a case study 🧐

🧵👇

17.06.2025 15:13 — 👍 4    🔁 0    💬 1    📌 1

Super cool opportunity to work with brilliant scientists and fantastic mentors @mattgroh.bsky.social and Dashun Wang 🌟🌟

Feel free to reach out!

02.04.2025 14:35 — 👍 2    🔁 0    💬 0    📌 0

Our paper: Decision-Point Guided Safe Policy Improvement
We show that a simple approach to learning safe RL policies can outperform most offline RL methods (+ theoretical guarantees!).

How? Just allow the state-actions that have been seen enough times! 🤯

arxiv.org/abs/2410.09361
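A minimal sketch of that core idea (visit-count thresholding with a behavior-policy fallback); the names and threshold are illustrative, and the paper’s actual algorithm and guarantees are more involved:

```python
# Sketch of the core idea as stated above: only deviate from the
# behavior policy at state-action pairs seen often enough in the batch.
# Names and the threshold are illustrative; see the paper for the
# actual algorithm and its theoretical guarantees.
from collections import Counter

N_MIN = 10  # minimum visit count before an action counts as "seen enough"

def build_safe_policy(dataset, improved_policy, behavior_policy):
    """dataset: iterable of (state, action, reward, next_state) tuples."""
    visit_counts = Counter((s, a) for s, a, _, _ in dataset)

    def safe_policy(state):
        candidate = improved_policy(state)
        # Trust the improvement only where the data supports it;
        # otherwise fall back to the behavior policy's action.
        if visit_counts[(state, candidate)] >= N_MIN:
            return candidate
        return behavior_policy(state)

    return safe_policy
```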

23.01.2025 18:23 — 👍 3    🔁 1    💬 0    📌 0
