Navita Goyal's Avatar

Navita Goyal

@navitagoyal.bsky.social

PhD student @umdcs, Member of @ClipUmd lab | Earlier @AdobeResearch, @IITRoorkee

281 Followers  |  189 Following  |  7 Posts  |  Joined: 15.11.2024

Latest posts by navitagoyal.bsky.social on Bluesky

The Medium Is Not the Message: Deconfounding Document Embeddings via Linear Concept Erasure — Tuesday at 11:00, Poster

Co-DETECT: Collaborative Discovery of Edge Cases in Text Classification — Tuesday at 14:30, Demo

Measuring Scalar Constructs in Social Science with LLMs — Friday at 10:30, Oral at CSS

How Persuasive is Your Context? — Friday at 14:00, Poster


Happy to be at #EMNLP2025! Please say hello and come see our lovely work

05.11.2025 02:23 — 👍 8    🔁 1    💬 0    📌 0

I am recruiting PhD students to start in 2026! If you are interested in robustness, training dynamics, interpretability for scientific understanding, or the science of LLM analysis, you should apply. BU is building a huge LLM analysis/interp group and you’ll be joining at the ground floor.

16.10.2025 15:45 — 👍 58    🔁 19    💬 1    📌 1

This is a great use case of linear erasure! It's always exciting to see interesting applications of these techniques :)
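For readers unfamiliar with the term: a minimal sketch of what linear concept erasure does to document embeddings, using a toy mean-difference projection in Python. This is an illustration only, not the method from the poster above; all function and variable names are made up.

import numpy as np

# Toy linear concept erasure: project out the single direction that best
# separates a binary concept (e.g., document medium) from the embeddings.
def erase_concept(X, concept):
    # X: (n, d) embedding matrix; concept: (n,) array of 0/1 labels.
    direction = X[concept == 1].mean(axis=0) - X[concept == 0].mean(axis=0)
    direction /= np.linalg.norm(direction)          # unit concept direction
    return X - np.outer(X @ direction, direction)   # remove that component

# Usage: after erasure, embeddings no longer vary along the concept direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
labels = rng.integers(0, 2, size=100)
X_clean = erase_concept(X, labels)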

24.09.2025 18:45 — 👍 1    🔁 0    💬 1    📌 0

Congrats! 🎉 Very excited to follow your lab's work

19.08.2025 21:50 — 👍 1    🔁 0    💬 1    📌 0

Congratulations and welcome to Maryland!! 🎉

30.05.2025 16:30 — 👍 1    🔁 0    💬 1    📌 0
Post image

I'll be presenting this work with @rachelrudinger at #NAACL2025 tomorrow (Wednesday 4/30) in Albuquerque during Session C (Oral/Poster 2) at 2pm! 🔬

Decomposing hypotheses in traditional NLI and defeasible NLI helps us measure various forms of consistency of LLMs. Come join us!

29.04.2025 20:40 — 👍 8    🔁 3    💬 5    📌 1
Post image

What does it mean for #LLM output to be novel?
In work w/ johnchen6.bsky.social, Jane Pan, Valerie Chen and He He, we argue it needs to be both original and high quality. While prompting tricks trade one for the other, better models (scaling/post-training) can shift the novelty frontier 🧵

29.04.2025 16:35 — 👍 7    🔁 4    💬 2    📌 0

This option is available on the menu (three dots) next to the comment/repost/like section. I only see this when I am in the Discover feed though; not on my regular feed

27.04.2025 16:16 — 👍 2    🔁 0    💬 0    📌 0
Post image

🚨 New Paper 🚨

1/ We often assume that well-written text is easier to translate ✏️

But can #LLMs automatically rewrite inputs to improve machine translation? 🌍

Here’s what we found 🧵

17.04.2025 01:32 — 👍 8    🔁 4    💬 1    📌 0
Preview
Can you map it to English? The Role of Cross-Lingual Alignment in Multilingual Performance of LLMs Large language models (LLMs) pre-trained predominantly on English text exhibit surprising multilingual capabilities, yet the mechanisms driving cross-lingual generalization remain poorly understood. T...

🔈 NEW PAPER 🔈
Excited to share my paper that analyzes the effect of cross-lingual alignment on multilingual performance
Paper: arxiv.org/abs/2504.09378 🧵

18.04.2025 15:00 — 👍 0    🔁 2    💬 1    📌 0

Have work on the actionable impact of interpretability findings? Consider submitting to our Actionable Interpretability workshop at ICML! See below for more info.

Website: actionable-interpretability.github.io
Deadline: May 9

03.04.2025 17:58 — 👍 20    🔁 10    💬 0    📌 0

Thinking about paying $20k/month for a "PhD-level AI agent"? You might want to wait until their web browsing skills are on par with those of human PhD students 😛 Check out our new BEARCUBS benchmark, which shows web agents struggle to perform simple multimodal browsing tasks!

12.03.2025 16:08 — 👍 6    🔁 1    💬 0    📌 0

🚨 Our team at UMD is looking for participants to study how #LLM agent plans can help you answer complex questions

💰 $1 per question
🏆 Top-3 fastest + most accurate win $50
⏳ Questions take ~3 min => $20/hr+

Click here to sign up (please join, reposts appreciated 🙏): preferences.umiacs.umd.edu

11.03.2025 14:30 — 👍 2    🔁 3    💬 0    📌 0

This is called going above and beyond for the job assigned to you.

26.02.2025 01:28 — 👍 2    🔁 0    💬 1    📌 0
Preview
Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong Large Language Models (LLMs) are increasingly used for accessing information on the web. Their truthfulness and factuality are thus of great interest. To help users make the right decisions about the ...

Our paper studies over-reliance in claim verification with the help of an LLM assistant: arxiv.org/abs/2310.12558

Re mitigation: we find that showing users contrastive explanations—reasoning both why a claim may be true and why it may be false—helps counter over-reliance to some extent.

25.02.2025 16:43 — 👍 4    🔁 0    💬 0    📌 0
Post image

🚨 New Position Paper 🚨

Multiple choice evals for LLMs are simple and popular, but we know they are awful 😬

We complain they're full of errors, saturated, and test nothing meaningful, so why do we still use them? 🫠

Here's why MCQA evals are broken, and how to fix them 🧵

24.02.2025 21:03 — 👍 46    🔁 13    💬 2    📌 0

How can we generate synthetic data for a task that requires global reasoning over a long context (e.g., verifying claims about a book)? LLMs aren't good at *solving* such tasks, let alone generating data for them. Check out our paper for a compression-based solution!

21.02.2025 16:37 — 👍 17    🔁 4    💬 0    📌 0
Post image

This paper is really cool. They decompose NLI (and defeasible NLI) hypotheses into atoms, and then use these atoms to measure the logical consistency of LLMs.

E.g. for an entailment NLI example, each hypothesis atom should also be entailed by the premise.

Very nice idea 👏👏
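A minimal Python sketch of the atom-level consistency check described above. The nli_label callable is a hypothetical stand-in for any premise/hypothesis classifier returning "entailment", "neutral", or "contradiction"; this is not code from the paper.

# If the premise entails the full hypothesis, it should entail every atom the
# hypothesis decomposes into; the score below is the fraction of atoms for
# which the model's prediction matches that expectation.
def atom_consistency(premise, hypothesis_atoms, nli_label):
    labels = [nli_label(premise, atom) for atom in hypothesis_atoms]
    return sum(label == "entailment" for label in labels) / len(labels)

# Toy usage with a stub model that always predicts entailment:
premise = "A man in a red shirt is playing guitar on stage."
atoms = ["A man is playing guitar.",
         "The man is wearing a red shirt.",
         "The man is on stage."]
print(atom_consistency(premise, atoms, lambda p, h: "entailment"))  # -> 1.0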

18.02.2025 16:14 — 👍 15    🔁 3    💬 2    📌 0
Logo for TRAILS depicting a variety of sociotechnical settings in which AI is used.


Please join us for:
AI at Work: Building and Evaluating Trust

Presented by our Trustworthy AI in Law & Society (TRAILS) institute.

Feb 3-4
Washington DC

Open to all!

Details and registration at: trails.gwu.edu/trailscon-2025
Sponsorship details at: trails.gwu.edu/media/556

16.01.2025 15:20 — 👍 16    🔁 7    💬 0    📌 0
meme with three rows.

"this human-ai decision making leads to unfair outcomes" --> "panik"

"let's show explanations to help people be more fair" --> "kalm"

"those explanations are based on proxy features" --> "panik"


The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features

Despite hopes that explanations improve fairness, we see that when biases are hidden behind proxy features, explanations may not help.

Navita Goyal, Connor Baumler +al IUI’24
hal3.name/docs/daume23...
>

09.12.2024 11:41 — 👍 21    🔁 6    💬 1    📌 0
Preview
Causal Effect of Group Diversity on Redundancy and Coverage in Peer-Reviewing A large host of scientific journals and conferences solicit peer reviews from multiple reviewers for the same submission, aiming to gather a broader range of perspectives and mitigate individual biase...

This is my first time serving as an AC for a big conference.

Just read this great work by Goyal et al. arxiv.org/abs/2411.11437

I'm optimizing for high coverage and low redundancy—assigning reviewers based on relevant topics or affinity scores alone feels off. Seniority and diversity matter!

05.12.2024 00:44 — 👍 5    🔁 2    💬 1    📌 0
meme with a car veering away from « bad answers from search » to « bad answers from chatbots »


Large Language Models Help Humans Verify Truthfulness—Except When They Are Convincingly Wrong

Should one use chatbots or web search to fact check? Chatbots help more on avg, but people uncritically accept their suggestions much more often.

by Chenglei Si +al NAACL’24

hal3.name/docs/daume24...
>

03.12.2024 09:30 — 👍 30    🔁 5    💬 1    📌 0

🙋‍♀️

20.11.2024 11:31 — 👍 1    🔁 0    💬 0    📌 0
