
Kabir Ahuja

@kabirahuja2431.bsky.social

PhD student @uwnlp.bsky.social

168 Followers  |  595 Following  |  16 Posts  |  Joined: 20.11.2024

Latest posts by kabirahuja2431.bsky.social on Bluesky


I had always wanted to work on something that can combine my love for fiction and NLP research, making this project a lot of fun. Huge thanks to the wonderful @melaniesclar.bsky.social and @tsvetshop.bsky.social!

We welcome any feedback and questions -- don't hesitate to reach out!

16/16

22.04.2025 18:50 — 👍 1    🔁 1    💬 0    📌 0

FlawedFictions is now available on 🤗: huggingface.co/datasets/ka...

Code: github.com/kabirahuja2...

15/n

22.04.2025 18:50 — 👍 1    🔁 0    💬 1    📌 0

Overall, our work shows that deep narrative understanding/reasoning and generating logically consistent stories remain challenging even for frontier models. Read the full paper for more details: arxiv.org/abs/2504.11900

14/n

22.04.2025 18:50 — 👍 1    🔁 0    💬 1    📌 0

But how can story summaries have plot holes? Upon close inspection we find LLMs often omit crucial details in the summary that make subsequent events illogical or inconsistent. This highlights weaknesses in summarization -- a task many consider "solved" with current LLMs.

13/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

Our results show LLM-generated content contains significantly more plot holes than human-authored stories: 50%+ higher detection rates for summaries and 100%+ increase for contemporary adaptations of classics.

12/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

We then assess plot holes in LLM-generated text, focusing on story summarization and contemporary adaptation of classical stories. We use our best model on FlawedFictions to automatically detect the presence of plot holes in LLM-generated stories.

11/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

What mistakes do models make while assessing plot holes? Our analysis shows they:
- Misinterpret character motivations
- Incorrectly track entity states
- Miss genre conventions (especially in fantasy)
- Misinterpret story rules
Examples 👇🏻

10/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

Does extra test-time compute help? Mostly no. Increasing reasoning effort for o1 and o3-mini shows no improvements. Claude-3.7-Sonnet's extended thinking helps, but it still underperforms models using <50% of the test-time compute.

9/n

22.04.2025 18:50 — 👍 1    🔁 0    💬 1    📌 0

Yet on FlawedFictionsLong (our benchmark with longer stories), even the best models barely outperform trivial baselines. And these stories are still under 4000 words -- far shorter than novels or screenplays where plot holes typically occur.

8/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

We find that most open-weight models and proprietary LLMs like GPT-4o-mini, GPT-4o, and Claude-Haiku struggle on the task, often only slightly improving over trivial baselines. Advanced models like Claude-3.5-Sonnet and o1 fare better, approaching human performance.


7/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

We tested various LLMs on FlawedFictions. For the classification task we report accuracy, and for the localization task we define CEEval-Full (0-1), which measures whether models correctly localize both the sentences containing the error and the sentences contradicted by it.
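As a rough sketch of what such a localization score could look like (assuming sentence-index predictions and all-or-nothing credit -- the paper's actual CEEval-Full definition may award partial credit and differ in details):

```python
def ceeval_full_sketch(pred_error, pred_contradicted, gold_error, gold_contradicted):
    """Toy localization score in [0, 1]: full credit only when both the
    flawed sentences and the sentences they contradict are exactly right.
    This is an illustrative stand-in, not the paper's exact metric."""
    error_ok = set(pred_error) == set(gold_error)
    contradicted_ok = set(pred_contradicted) == set(gold_contradicted)
    return float(error_ok and contradicted_ok)

# Correct localization of both the error and the contradicted sentence:
score = ceeval_full_sketch([4], [1], [4], [1])  # -> 1.0
```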

6/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

Using FlawedFictionsMaker + human verification, we created FlawedFictions, a benchmark for plot hole detection that tests: a) identifying whether a story contains a plot hole, and b) localizing both the error and the contradicted fact in the text.
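The two tasks can be pictured as one annotated instance; the field names below are illustrative, not the released dataset's actual schema:

```python
# Hypothetical shape of one FlawedFictions instance (field names invented
# for illustration; see the HF dataset for the real schema).
example = {
    "story": [
        "Watson has a war wound on his left arm.",
        "Holmes and Watson take a case in Baker Street.",
        "Watson's knee wound aches as they climb the stairs.",
    ],
    "has_plot_hole": True,          # task (a): binary classification
    "error_sentences": [2],         # task (b): sentence introducing the flaw
    "contradicted_sentences": [0],  # task (b): fact the flaw contradicts
}
```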

5/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0
Diagram showing the "FlawedFictionsMaker" algorithm that introduces plot holes into stories. It has 5 steps labeled A through E: A: "Partition Original Story in Three Acts" - Shows three story snippets about Watson's injured left arm. B: "Extract Story Facts" - Lists facts including "Sherlock lives in Baker Street" and "Watson has a war wound on his left arm." C: "Select and Build Contradicting Fact" - Shows "What if Watson had a war wound on his left knee instead?" D: "Generate Counterfactual Story" - Shows the same three story snippets but with "knee" replacing "arm" in red text. E: "Rebuild Story, Creating a Plot Hole" - Shows the altered story with inconsistent mentions of both arm and knee injuries.


We introduce FlawedFictionsMaker, an algorithm to controllably generate plot holes in stories by extracting facts from a story's first act and contradicting them later in the story.

E.g., if Watson has a left arm injury, we edit it to become a knee injury in later mentions.
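The A-E structure can be mimicked with a toy sketch (the real algorithm uses an LLM for fact extraction and counterfactual rewriting, plus human verification; the plain string substitution and function names below are purely illustrative):

```python
def partition_into_acts(sentences):
    """Step A (toy): split the story into three roughly equal acts."""
    k = max(1, len(sentences) // 3)
    return sentences[:k], sentences[k:2 * k], sentences[2 * k:]

def make_flawed_fiction(sentences, fact, contradicting_fact):
    """Steps B-E (toy): keep the original fact in act one, but substitute
    the contradicting fact in all later acts, so the rebuilt story
    asserts both versions -- a plot hole."""
    act1, act2, act3 = partition_into_acts(sentences)
    rewrite = lambda s: s.replace(fact, contradicting_fact)
    return list(act1) + [rewrite(s) for s in act2 + act3]

story = [
    "Watson has a war wound on his left arm.",
    "Watson winces when his left arm is touched.",
    "In winter, Watson's left arm aches badly.",
]
flawed = make_flawed_fiction(story, "left arm", "left knee")
# Act one still says "left arm"; later sentences now say "left knee".
```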

4/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

It can also be interpreted as inference time world-modeling - inferring the rules of a story's world at test time and assessing if they're consistently followed throughout the narrative.

3/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0

Why study plot hole detection? It's a sophisticated reasoning problem requiring:
- Tracking states across long contexts
- Common sense & pragmatics for implicit details
- Theory of mind for character motivations/beliefs

2/n

22.04.2025 18:50 — 👍 0    🔁 0    💬 1    📌 0
A screenshot of the first page of the paper, containing the paper title: Finding Flawed Fictions: Evaluating Complex Reasoning in Language Models via Plot Hole Detection, and the names of the authors: Kabir Ahuja, Melanie Sclar, and Yulia Tsvetkov. All three authors are from the CSE department at the University of Washington in Seattle, USA. They can be reached at {kahuja,msclar,yuliats}@cs.washington.edu


📢 New Paper!

Tired 😴 of reasoning benchmarks full of math & code? In our work we consider the problem of reasoning about plot holes in stories -- inconsistencies in a storyline that break the internal logic or rules of a story's world 🌎

W/ @melaniesclar.bsky.social and @tsvetshop.bsky.social

1/n

22.04.2025 18:50 — 👍 10    🔁 4    💬 1    📌 1

31% of US adults use generative AI for healthcare 🤯 But most AI systems answer questions assertively -- even when they don't have the necessary context. Introducing #MediQ, a framework that enables LLMs to recognize uncertainty 🤔 and ask the right questions ❓ when info is missing: 🧵

06.12.2024 22:51 — 👍 68    🔁 14    💬 2    📌 2

Excited to release Tulu 3! We worked hard to try and make the best open post-training recipe we could, and the results are good!
I was lucky enough to work on almost every stage of the pipeline in one way or another. Some comments + highlights ⬇️

21.11.2024 17:45 — 👍 9    🔁 5    💬 1    📌 0
