
Hailey Joren

@haileyjoren.bsky.social

PhD Student @ UC San Diego. Researching reliable, interpretable, and human-aligned ML/AI.

462 Followers  |  77 Following  |  8 Posts  |  Joined: 21.11.2024

Latest posts by haileyjoren.bsky.social on Bluesky


Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest work with @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.

24.04.2025 06:19 · 👍 16    🔁 7    💬 1    📌 1
Sufficient Context: A New Lens on Retrieval Augmented Generation Systems
Augmenting LLMs with context leads to improved performance across many applications. Despite much research on Retrieval Augmented Generation (RAG) systems, an open question is whether errors arise bec...

I couldn't make it to ICLR this year but co-author @cyroid.bsky.social will be around to chat!
📄 Paper (ICLR '25): arxiv.org/abs/2411.06037
💻 Key Findings & Prompts: github.com/hljoren/suff...
#RAG #ICLR2025

24.04.2025 18:18 · 👍 5    🔁 0    💬 0    📌 1

Our work suggests that solving RAG hallucination problems requires moving beyond just improving retrieval: we need models that can accurately determine when retrieved information suffices for answering and abstain when appropriate confidence thresholds aren't met.

24.04.2025 18:18 · 👍 0    🔁 0    💬 1    📌 0
Line graph comparing selective generation methods showing coverage vs. accuracy trade-offs. Purple lines (sufficient context + confidence) outperform gray lines (confidence only), especially for HotpotQA dataset and Gemini model.

Diagram of the Selective Generation Pipeline. The workflow shows how Input Query and Input Context feed into both Self-reported model confidence (gray box) and Sufficient Context AutoRater label (purple box). These signals combine in a Logistic regression model, which produces a score. This score is compared against a Threshold determined by Desired coverage. Depending on the comparison, the system either proceeds with the Model Response (green box) or chooses to Abstain (blue box).

Building on these insights, we developed a selective generation framework using both sufficient context signals and model confidence to decide when to respond vs. abstain, improving accuracy of responses by 2-10% for Gemini, GPT, and Gemma.
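For readers who want the mechanics, here is a minimal sketch of the selective generation idea in the diagram above. The toy calibration data, feature names, and scikit-learn logistic regression are assumptions for illustration only, not the code released with the paper.

```python
# Minimal sketch of the selective generation pipeline described above.
# Everything here is illustrative: the feature names, the toy calibration
# data, and the use of scikit-learn are assumptions, not the released code.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Calibration set: one row per (query, context) pair.
#   confidence -- the model's self-reported confidence in its answer (0..1)
#   sufficient -- autorater label: 1 if the context suffices to answer, else 0
#   correct    -- 1 if the model's answer was judged correct, else 0
confidence = np.array([0.9, 0.4, 0.8, 0.2, 0.7, 0.3])
sufficient = np.array([1, 0, 1, 0, 1, 0])
correct = np.array([1, 0, 1, 0, 0, 0])

# Combine the two signals in a logistic regression that predicts correctness.
X = np.column_stack([confidence, sufficient])
clf = LogisticRegression().fit(X, correct)
scores = clf.predict_proba(X)[:, 1]

# Choose the score threshold so that the desired fraction of queries is
# answered (coverage); everything below the threshold becomes an abstention.
desired_coverage = 0.5
threshold = float(np.quantile(scores, 1.0 - desired_coverage))

def respond_or_abstain(conf: float, suff: int, model_response: str) -> str:
    """Answer when the combined score clears the threshold, otherwise abstain."""
    score = clf.predict_proba([[conf, suff]])[0, 1]
    return model_response if score >= threshold else "ABSTAIN"

print(respond_or_abstain(0.85, 1, "Paris"))  # likely answered
print(respond_or_abstain(0.30, 0, "Paris"))  # likely abstains
```

The point of this arrangement, as described in the diagram, is that the threshold comes from a desired coverage level on a calibration set, so the coverage vs. accuracy trade-off can be tuned without retraining the underlying LLM.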

24.04.2025 18:18 · 👍 1    🔁 0    💬 1    📌 0
Table categorizing cases where models correctly answer questions despite insufficient context, including yes/no questions, limited choice questions, multi-hop fragments, partial information, and cases where parametric knowledge bridges gaps.

Intriguingly, models sometimes generate correct answers despite insufficient context. We taxonomize these cases: parametric knowledge bridging information gaps, yes/no questions with 50% chance of correctness, and instances where the context provides partial reasoning paths.

24.04.2025 18:18 · 👍 1    🔁 0    💬 1    📌 0
Bar graph showing percentage of instances with sufficient context across datasets. FreshQA has highest sufficient context (77%), while HotpotQA and Musique have around 44-45% sufficient context.

We analyzed standard QA datasets through our sufficient context lens and found that a surprising fraction of instances lacks sufficient information: ~56% for Musique, ~56% for HotpotQA, and ~23% for FreshQA. This highlights the magnitude of the information retrieval challenge.
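For a concrete picture of how such an audit could be scripted, here is a rough sketch of an LLM-based sufficient context autorater loop. The prompt wording and the ask_llm helper are placeholders assumed for illustration; the actual prompts are in the GitHub repository linked above in this thread.

```python
# Rough sketch of auditing a QA dataset for sufficient context with an LLM
# autorater. The prompt wording and ask_llm() are illustrative placeholders;
# the actual prompts are in the linked GitHub repository.

AUTORATER_PROMPT = """You are given a question and a retrieved context.
Reply SUFFICIENT if the context contains enough information to answer the
question, and INSUFFICIENT otherwise.

Question: {question}
Context: {context}
Label:"""

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM serves as the autorater."""
    raise NotImplementedError

def fraction_sufficient(dataset) -> float:
    """dataset: iterable of dicts with 'question' and 'context' keys."""
    labels = [
        ask_llm(AUTORATER_PROMPT.format(**ex)).strip().upper().startswith("SUFFICIENT")
        for ex in dataset
    ]
    return sum(labels) / len(labels)
```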

24.04.2025 18:18 · 👍 0    🔁 0    💬 1    📌 0

Conversely, smaller models (Mistral 3, Gemma 2) struggle even with sufficient context, either hallucinating or failing to extract answers from the provided information. Neither class of model solves the fundamental RAG reliability challenge.

24.04.2025 18:18 · 👍 0    🔁 0    💬 1    📌 0
Bar chart comparing model performance on datasets stratified by sufficient context. Graph shows that larger models (Gemini, GPT, Claude) perform better with sufficient context but still hallucinate with insufficient context, while smaller models (Gemma) struggle across conditions.

A major finding: When context is sufficient, larger models (Gemini 1.5 Pro, GPT-4o, Claude 3.5) excel. But when it's insufficient, they're more likely to hallucinate than abstain, presenting incorrect answers with high confidence.
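The stratified comparison behind this finding amounts to bucketing judged outcomes by the sufficient context label, as sketched below; the field names are assumptions for illustration, not the paper's evaluation code.

```python
# Sketch of stratifying outcomes by the sufficient-context label, as in the
# bar chart above. Field names ('sufficient', 'outcome') are assumptions.
from collections import Counter

def stratified_outcomes(examples):
    """examples: dicts with 'sufficient' (bool) and
    'outcome' in {'correct', 'hallucinated', 'abstained'}."""
    buckets = {True: Counter(), False: Counter()}
    for ex in examples:
        buckets[ex["sufficient"]][ex["outcome"]] += 1
    rates = {}
    for sufficient, counts in buckets.items():
        total = sum(counts.values())
        if total:
            rates[sufficient] = {k: v / total for k, v in counts.items()}
    return rates
```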

24.04.2025 18:18 · 👍 0    🔁 0    💬 1    📌 0

When RAG systems hallucinate, is the LLM misusing available information, or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work with Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, @cyroid.bsky.social

24.04.2025 18:18 · 👍 11    🔁 5    💬 1    📌 0
