
Maharshi Gor

@maharshigor.bsky.social

PhD student @ Univ of Maryland NLP, Question Answering, Human AI, LLMs More at mgor.info

108 Followers  |  195 Following  |  4 Posts  |  Joined: 09.11.2024

Latest posts by maharshigor.bsky.social on Bluesky

Is your benchmark truly adversarial? AdvScore: Evaluating Human-Grounded Adversarialness Adversarial datasets should validate AI robustness by providing samples on which humans perform well, but models do not. However, as models evolve, datasets can become obsolete. Measuring whether a da...

πŸ“ Full paper link: arxiv.org/abs/2406.16342

TL;DR: We introduce AdvScore, a human-grounded metric to measure how "adversarial" a dataset really isβ€”by comparing model vs. human performance. It helps build better, lasting benchmarks like AdvQA (proposed) that evolve with AI progress.

01.05.2025 12:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
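To make the comparison in the TL;DR concrete, here is a minimal sketch of the intuition behind a human-grounded adversarialness check: items count as adversarial when humans answer them correctly but models do not. The function name, aggregation, and data shapes below are illustrative assumptions, not the paper's actual AdvScore definition (see the arXiv link above for that).

```python
import numpy as np

def adversarialness_sketch(human_correct, model_correct):
    """Toy per-item adversarialness: how much better humans do than models.

    human_correct / model_correct: (n_items, n_subjects) binary arrays of
    response correctness. This captures only the intuition behind AdvScore
    (humans succeed where models fail), not the paper's actual metric.
    """
    human_acc = human_correct.mean(axis=1)   # per-item human accuracy
    model_acc = model_correct.mean(axis=1)   # per-item model accuracy
    per_item = human_acc - model_acc         # > 0: adversarial to models
    return per_item.mean()

# Example: 3 items answered by 10 humans and 5 models; models struggle
# on item 2, so the sketch score comes out positive.
rng = np.random.default_rng(0)
humans = rng.random((3, 10)) < np.array([[0.9], [0.85], [0.8]])
models = rng.random((3, 5)) < np.array([[0.8], [0.2], [0.7]])
print(adversarialness_sketch(humans, models))
```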

πŸ†ADVSCORE won an Outstanding Paper Award at #NAACL2025

🚨 Don't miss out on our poster presentation *today at 2 pm* by Yoo Yeon (first author).

πŸ“Poster Session 5 - HC: Human-centered NLP

πŸ’Ό Highly recommend talking to her if you are hiring and/or interested in Human-focused Al dev and evals!

01.05.2025 12:38 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

🚨 New Position Paper 🚨

Multiple choice evals for LLMs are simple and popular, but we know they are awful 😬

We complain they're full of errors, saturated, and test nothing meaningful, so why do we still use them? 🫠

Here's why MCQA evals are broken, and how to fix them 🧡

24.02.2025 21:03 β€” πŸ‘ 46    πŸ” 13    πŸ’¬ 2    πŸ“Œ 0
meme with three rows.

"this human-ai decision making leads to unfair outcomes" --> "panik"

"let's show explanations to help people be more fair" --> "kalm"

"those explanations are based on proxy features" --> "panik"


The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features

Despite hopes that explanations improve fairness, we see that when biases are hidden behind proxy features, explanations may not help.

Navita Goyal, Connor Baumler, et al., IUI'24
hal3.name/docs/daume23...

09.12.2024 11:41 β€” πŸ‘ 21    πŸ” 6    πŸ’¬ 1    πŸ“Œ 0
Meme of two muscular arms grasping. The first is labeled "humans" the second "AI systems" and where they grasp is labeled "item response theory."


Do great minds think alike? Investigating Human-AI Complementarity in QA

We use item response theory to compare the capabilities of 155 people vs 70 chatbots at answering questions, teasing apart complementarities; implications for design.

by Maharshi Gor et al., EMNLP'24
hal3.name/docs/daume24...

12.12.2024 10:40 β€” πŸ‘ 10    πŸ” 5    πŸ’¬ 2    πŸ“Œ 0
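For readers unfamiliar with the method mentioned above: item response theory (IRT) places subjects and items on a shared latent scale, so human and model abilities can be compared directly. Below is a minimal two-parameter logistic (2PL) sketch, a standard IRT formulation; the paper's exact model may differ, and the parameter values are made up.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT: P(correct answer) given subject
    skill theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Fitting theta per subject (human or chatbot) on shared questions puts
# everyone on one scale, exposing items where the groups' predicted
# success diverges, i.e., where humans and AI complement each other.
print(irt_2pl(theta=1.0, a=1.2, b=0.5))  # skilled subject, easy-ish item
```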

πŸ’―

Hallucination is totally the wrong word, implying it is perceiving the world incorrectly.

But it's generating false, plausible-sounding statements. Confabulation is literally the perfect word.

So, let's all please start referring to any junk that an LLM makes up as "confabulations".

11.12.2024 14:47 β€” πŸ‘ 205    πŸ” 45    πŸ’¬ 18    πŸ“Œ 8

I used to like Writefull when it was new and there was nothing better. But 🥲

12.12.2024 16:07 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

starter pack for the Computational Linguistics and Information Processing group at the University of Maryland - get all your NLP and data science here!

go.bsky.app/V9qWjEi

10.12.2024 17:14 β€” πŸ‘ 29    πŸ” 12    πŸ’¬ 1    πŸ“Œ 1

πŸ‘‹πŸ½ Hey! 🫑

11.11.2024 05:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
