
Archiki Prasad

@archiki.bsky.social

Ph.D. Student at UNC NLP | Apple Scholar in AI/ML Ph.D. Fellowship | Prev: FAIR at Meta, AI2, Adobe (Intern) | Interests: #NLP, #ML | https://archiki.github.io/

621 Followers  |  844 Following  |  24 Posts  |  Joined: 21.11.2024

Latest posts by archiki.bsky.social on Bluesky

UT Austin campus

Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! πŸŽ‰

05.05.2025 20:28 β€” πŸ‘ 43    πŸ” 9    πŸ’¬ 5    πŸ“Œ 2

🌡 I'm going to be presenting PBT at #NAACL2025 today at 2PM! Come by poster session 2 if you want to hear about:
-- balancing positive and negative persuasion
-- improving LLM teamwork/debate
-- training models on simulated dialogues

With @mohitbansal.bsky.social and @peterbhase.bsky.social

30.04.2025 15:04 β€” πŸ‘ 8    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

✈️ Heading to #NAACL2025 to present 3 main conf. papers, covering training LLMs to balance accepting and rejecting persuasion, multi-agent refinement for more faithful generation, and adaptively addressing varying knowledge conflict.

Reach out if you want to chat!

29.04.2025 17:52 β€” πŸ‘ 15    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0

Check out 🚨CAPTURe🚨 -- a new benchmark testing spatial reasoning by making VLMs count objects under occlusion.

SOTA VLMs (GPT-4o, Qwen2-VL, Intern-VL2) have high error rates on CAPTURe (while humans have low error rates ✅), and models struggle to reason about occluded objects.

arxiv.org/abs/2504.15485

πŸ§΅πŸ‘‡

24.04.2025 15:14 β€” πŸ‘ 5    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

In Singapore for #ICLR2025 this week to present papers + keynotes πŸ‘‡, and looking forward to seeing everyone -- happy to chat about research, or faculty+postdoc+phd positions, or simply hanging out (feel free to ping)! πŸ™‚

Also meet our awesome students/postdocs/collaborators presenting their work.

21.04.2025 16:49 β€” πŸ‘ 19    πŸ” 4    πŸ’¬ 1    πŸ“Œ 1
Retrieval-Augmented Generation with Conflicting Evidence
Large language model (LLM) agents are increasingly employing retrieval-augmented generation (RAG) to improve the factuality of their responses. However, in practice, these systems often need to handle...

Thanks to my coauthors: @hwang98.bsky.social @esteng.bsky.social @mohitbansal.bsky.social
for the fun collaboration!

@unccs.bsky.social

Paper: arxiv.org/abs/2504.13079
Data and Code: github.com/HanNight/RAM...

18.04.2025 17:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Can RAG systems handle imbalanced evidence or increasing misinformation?

➑️ As document support becomes imbalanced, baselines ignore under-supported correct answers but MADAM-RAG maintains stable performance

➑️ As misinformation πŸ“ˆ, baselines degrade sharply (βˆ’46%) but MADAM-RAG remains more robust

18.04.2025 17:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

How important are multi-round debate and aggregation in MADAM-RAG?

Increasing debate rounds in MADAM-RAG improves performance by allowing agents to refine answers via debate.

Aggregator provides even greater gains, especially in early rounds, aligning conflicting views & suppressing misinfo.

18.04.2025 17:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We evaluate on 3 datasets: FaithEval (suppression of misinformation), AmbigDocs (disambiguation across sources), RAMDocs (our dataset w/ different types of conflict).

MADAM-RAG consistently outperforms concatenated-prompt and Astute RAG baselines across all three datasets and model backbones.

18.04.2025 17:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We propose MADAM-RAG, a structured, multi-agent framework designed to handle inter-doc conflicts, misinformation, & noise in retrieved content, comprising:

1️⃣ Independent LLM agents, each generating an intermediate response conditioned on a single doc
2️⃣ Centralized aggregator
3️⃣ Iterative multi-round debate (see the sketch below)

18.04.2025 17:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
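A minimal sketch of how the three components above might be wired together, assuming a generic `llm(prompt)` callable (the prompts and function names are hypothetical placeholders; the actual implementation is in the repo linked above):

```python
# Sketch of a MADAM-RAG-style multi-agent debate loop.
# `llm`, the prompts, and all names here are illustrative placeholders.

def llm(prompt: str) -> str:
    """Placeholder for any LLM call (API or local model)."""
    raise NotImplementedError

def agent_answer(question: str, doc: str, summary: str = "") -> str:
    # 1) Each agent sees the question plus exactly one retrieved doc,
    #    optionally with the aggregator's summary of the previous round.
    prompt = f"Question: {question}\nDocument: {doc}\n"
    if summary:
        prompt += f"Discussion so far: {summary}\n"
    return llm(prompt + "Answer using only your document:")

def aggregate(question: str, answers: list[str]) -> str:
    # 2) A centralized aggregator reconciles the per-document answers,
    #    surfacing conflicts and discounting likely misinformation.
    joined = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
    return llm(f"Question: {question}\n{joined}\n"
               "Resolve conflicts and state the best-supported answer:")

def madam_rag(question: str, docs: list[str], rounds: int = 3) -> str:
    # 3) Iterative multi-round debate: agents revise their answers
    #    after seeing the aggregator's summary from the last round.
    summary = ""
    for _ in range(rounds):
        answers = [agent_answer(question, d, summary) for d in docs]
        summary = aggregate(question, answers)
    return summary
```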

πŸ“‚RAMDocs is designed to reflect the complexities of real-world retrieval. It includes:

➑️ Ambiguous queries w/ multiple valid ans.
➑️ Imbalanced document support (some answers backed by many sources, others by fewer)
➑️ Docs w/ misinformation (plausible but wrong claims) or noisy/irrelevant content

18.04.2025 17:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
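For concreteness, a record in a dataset like this might look as follows (the field names and values are invented for illustration, not the actual RAMDocs schema):

```python
# Invented example of a RAMDocs-style record (not the real schema).
example = {
    "query": "Who directed the film Titanic?",  # ambiguous: 1953 vs. 1997 film
    "documents": [
        {"text": "James Cameron directed Titanic (1997).", "label": "correct"},
        {"text": "Titanic (1997) was directed by James Cameron.", "label": "correct"},
        {"text": "Jean Negulesco directed Titanic (1953).", "label": "correct"},  # under-supported answer
        {"text": "Titanic was directed by Steven Spielberg.", "label": "misinformation"},
        {"text": "The RMS Titanic sank in April 1912.", "label": "noise"},  # irrelevant
    ],
    "valid_answers": ["James Cameron", "Jean Negulesco"],
}
```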

🚨Real-world retrieval is messy: queries are ambiguous or docs conflict & have incorrect/irrelevant info. How can we jointly address these problems?

➑️RAMDocs: challenging dataset w/ ambiguity, misinformation & noise
➑️MADAM-RAG: multi-agent framework, debates & aggregates evidence across sources

πŸ§΅β¬‡οΈ

18.04.2025 17:05 β€” πŸ‘ 14    πŸ” 7    πŸ’¬ 3    πŸ“Œ 0

What if we could transform advanced math problems into abstract programs that can generate endless, verifiable problem variants?

Presenting EFAGen, which automatically transforms static advanced math problems into their corresponding executable functional abstractions (EFAs).
πŸ§΅πŸ‘‡

15.04.2025 19:37 β€” πŸ‘ 15    πŸ” 5    πŸ’¬ 1    πŸ“Œ 1
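To make "executable functional abstraction" concrete, here is a toy example for a deliberately simple problem; it only illustrates the parameterize-render-verify pattern, while EFAGen's actual EFAs target advanced math:

```python
import random

# Toy EFA: a static problem like "sum of the first 4 multiples of 7"
# becomes a program that samples parameters, renders a statement,
# and computes the verified answer for each variant.

def sample_variant(rng: random.Random) -> tuple[str, int]:
    a = rng.randint(2, 20)  # how many multiples
    b = rng.randint(2, 20)  # which multiple
    statement = f"What is the sum of the first {a} multiples of {b}?"
    answer = b * a * (a + 1) // 2  # b*(1 + 2 + ... + a)
    return statement, answer

rng = random.Random(0)
for _ in range(3):  # endless, verifiable problem variants
    question, answer = sample_variant(rng)
    print(question, "->", answer)
```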

🚨Announcing TaCQ 🚨 a new mixed-precision quantization method that identifies critical weights to preserve. We integrate key ideas from circuit discovery, model editing, and input attribution to improve low-bit quant., w/ 96% 16-bit acc. at 3.1 avg bits (~6x compression)

πŸ“ƒ arxiv.org/abs/2504.07389

12.04.2025 14:19 β€” πŸ‘ 15    πŸ” 7    πŸ’¬ 1    πŸ“Œ 1
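For intuition, one attribution-style way to score weight criticality is |weight * gradient|, as in saliency and input-attribution work; the sketch below is in that spirit only, not TaCQ's actual criterion (see the paper):

```python
import torch

# Sketch of saliency-driven mixed-precision selection: keep the top
# fraction of weights (by |weight * gradient|) in 16-bit and quantize
# the rest to low bit-width. Illustrative only, not TaCQ's method.

def select_critical_weights(weight: torch.Tensor,
                            grad: torch.Tensor,
                            keep_frac: float = 0.01) -> torch.Tensor:
    saliency = (weight * grad).abs()               # attribution-style score
    k = max(1, int(keep_frac * weight.numel()))
    threshold = saliency.flatten().topk(k).values.min()
    return saliency >= threshold                   # mask of weights to keep in 16-bit

w = torch.randn(256, 256)
g = torch.randn(256, 256)  # gradient from a calibration batch
mask = select_critical_weights(w, g)
print(f"preserving {mask.float().mean().item():.2%} of weights in 16-bit")
```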

πŸŽ‰ A big congratulations to @archiki.bsky.social (advised by Prof. @mohitbansal.bsky.social) for the being awarded the 2025 Apple Scholars in AI/ML PhD Fellowship!", we are proud of you! πŸ‘

27.03.2025 19:36 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Thanks Elias!!

27.03.2025 20:24 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks Jaemin, learned so much from you as well!

27.03.2025 20:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸŽ‰πŸŽ‰ Big congrats to @archiki.bsky.social on being awarded the @Apple AI/ML PhD Fellowship, for her extensive contributions in evaluating+improving reasoning in language/reward models and their applications to new domains (ReCEval, RepARe, System-1.x, ADaPT, ReGAL, ScPO, UTGen, GrIPS)! #ProudAdvisor

27.03.2025 19:41 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

πŸ₯³πŸ₯³ Honored and grateful to be awarded the 2025 Apple Scholars in AI/ML PhD Fellowship! ✨

Huge shoutout to my advisor @mohitbansal.bsky.social, & many thanks to my lab mates @unccs.bsky.social , past collaborators + internship advisors for their support β˜ΊοΈπŸ™

machinelearning.apple.com/updates/appl...

27.03.2025 19:25 β€” πŸ‘ 14    πŸ” 3    πŸ’¬ 1    πŸ“Œ 3

Introducing VEGGIE πŸ₯¦β€”a unified, end-to-end, and versatile instructional video generative model.

VEGGIE supports 8 skills, from object addition/removal/changing and stylization to concept grounding/reasoning. It exceeds SoTA and shows 0-shot multimodal instructional & in-context video editing.

19.03.2025 18:56 β€” πŸ‘ 5    πŸ” 4    πŸ’¬ 1    πŸ“Œ 1

🚨 Check out "UTGen & UTDebug" for learning to automatically generate unit tests (i.e., discovering inputs which break your code) and then applying them to debug code with LLMs, with strong gains (>12% pass@1) across multiple models/datasets! (see details in πŸ§΅πŸ‘‡)

1/4

05.02.2025 18:53 β€” πŸ‘ 7    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0
Learning to Generate Unit Tests for Automated Debugging
Unit tests (UTs) play an instrumental role in assessing code correctness as well as providing feedback to a large language model (LLM) as it iteratively debugs faulty code, motivating automated test g...

Thanks to my amazing co-authors @esteng.bsky.social (co-lead), @cyjustinchen.bsky.social, @codezakh.bsky.social, @mohitbansal.bsky.social

@unccs.bsky.social

Paper: arxiv.org/abs/2502.01619
Code+Datasets: github.com/archiki/UTGe...

04.02.2025 19:09 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Lastly, we show that both test-time scaling and backtracking are crucial for UTDebug, and scaling the number of generated UTs also consistently improves code accuracy.

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Combining UTGen with UTDebug 🤝 we consistently outperform no UT feedback, randomly sampling UTs, and prompting for targeted UTs across 3 models & datasets.

For partially correct code with subtle errors (our MBPP+Fix hard split) debugging with UTGen improves over baselines by >12.35% on Qwen 2.5!

04.02.2025 19:09 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

RQ3: We also propose ✨UTDebug✨ with two key modifications:

1⃣ Test-time scaling (self-consistency over multiple samples) for increasing output acc.

2⃣ Validation & backtracking: generate multiple UTs for validation, accept edits only when the overall pass rate increases, and backtrack otherwise (see the sketch below)

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
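A rough sketch of the accept-or-backtrack loop (the argument names are placeholders for the LLM editor and the test runner; the released code is linked above in the thread):

```python
# Sketch of UTDebug-style validation + backtracking. `llm_fix` and
# `run_tests` are placeholder callables: the first proposes an edited
# program, the second returns the fraction of unit tests passed.

def utdebug(code: str, unit_tests: list, llm_fix, run_tests,
            max_rounds: int = 5) -> str:
    best_code = code
    best_pass = run_tests(best_code, unit_tests)
    for _ in range(max_rounds):
        if best_pass == 1.0:                      # all UTs pass; done
            break
        candidate = llm_fix(best_code, unit_tests)
        cand_pass = run_tests(candidate, unit_tests)
        if cand_pass > best_pass:                 # accept: pass rate improved
            best_code, best_pass = candidate, cand_pass
        # otherwise backtrack: discard the edit and retry from best_code
    return best_code
```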

We verify the tradeoff for 0-shot LLMs: when using UTs generated w/o conditioning on the code, the output acc is high -- but these UTs are not effective at revealing errors (especially for MBPP+Fix Hard). When prompted to generate failing UTs, we get a high attack rate but less accurate UT outputs.

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

On three metrics: attack rate, output acc, and acc + attack (measuring both), we benchmark several open-source 7-8B LLMs.

We find that UTGen models balance output acc and attack rate and result in 7.59% more failing/error-revealing unit tests with correct outputs on Qwen-2.5.

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

✨UTGen✨ bootstraps training data from code generation datasets to train unit test generators.

Given coding problems and their solutions, we 1⃣ perturb the code to simulate errors, 2⃣ find challenging UT inputs, 3⃣ generate CoT rationales deducing the correct UT output for those inputs (see the sketch below).

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
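Sketched in Python, the three bootstrapping steps might look like this (all helper callables are hypothetical stand-ins for the LLM-driven steps in the paper):

```python
# Sketch of UTGen's training-data bootstrapping. `perturb`,
# `find_challenging_input`, and `write_rationale` are placeholder
# callables for LLM-driven steps; `run_code` executes a program.

def run_code(code: str, test_input):
    """Placeholder: execute `code` on `test_input`, return its output."""
    raise NotImplementedError

def bootstrap_example(problem: str, gold_code: str,
                      perturb, find_challenging_input, write_rationale):
    buggy_code = perturb(gold_code)                  # 1) simulate an error
    ut_input = find_challenging_input(buggy_code, gold_code)  # 2) input where they differ
    ut_output = run_code(gold_code, ut_input)        # expected output from the gold solution
    rationale = write_rationale(problem, ut_input, ut_output)  # 3) CoT deducing the output
    return {"problem": problem, "ut_input": ut_input,
            "rationale": rationale, "ut_output": ut_output}
```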

We find zero-shot LLMs exhibit a tradeoff between output accuracy and attack rate: UTs whose outputs can be predicted correctly tend to be too easy (not failing), while it is hard to predict the outputs of challenging inputs.

So how do we do both, i.e., have a high attack rate and output acc (RQ2)?

A: ✨UTGen ✨

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

RQ1: What are desirable properties of UT generators?

1⃣ The generator should correctly predict the output for a UT input to a task (output acc πŸ“ˆ)

2⃣ It should uncover errors by generating failing UT inputs, i.e., given incorrect code, generate challenging inputs that raise errors (attack rate 📈)

04.02.2025 19:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
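These two properties could be measured roughly as follows (paraphrased from the thread; `run_code` is a placeholder and the exact metric definitions are in the paper):

```python
# Sketch of the UT-generator metrics from the thread: output acc,
# attack rate, and their conjunction (acc + attack). Each generated
# UT is a dict with an "input" and a predicted "expected" output.

def evaluate_uts(uts: list[dict], gold_code: str, buggy_code: str, run_code):
    n = len(uts)
    # output acc: predicted expected output matches the gold solution's output
    out_acc = sum(run_code(gold_code, u["input"]) == u["expected"] for u in uts) / n
    # attack rate: the UT fails (reveals the bug) when run against buggy code
    attack = sum(run_code(buggy_code, u["input"]) != u["expected"] for u in uts) / n
    # acc + attack: the UT is simultaneously correct and error-revealing
    both = sum(run_code(gold_code, u["input"]) == u["expected"] and
               run_code(buggy_code, u["input"]) != u["expected"] for u in uts) / n
    return out_acc, attack, both
```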
