Zaid Khan

@codezakh.bsky.social

PhD student @ UNC NLP with @mohitbansal working on grounded reasoning + code generation | currently interning at Ai2 (PRIOR) | formerly NEC Laboratories America | BS + MS @ Northeastern zaidkhan.me

271 Followers  |  527 Following  |  9 Posts  |  Joined: 21.11.2024

Latest posts by codezakh.bsky.social on Bluesky

🔥 Huge CONGRATS to Jaemin + @jhucompsci.bsky.social! 🎉

Very proud of his journey as an amazing researcher (covering groundbreaking, foundational research on important aspects of multimodality+other areas) & as an awesome, selfless mentor/team player 💙
-- Apply to his group & grab him for his gap year!

20.05.2025 18:18 — 👍 6    🔁 1    💬 1    📌 0

Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎

20.05.2025 17:58 — 👍 26    🔁 5    💬 3    📌 2

🚨 Introducing our @tmlrorg.bsky.social paper “Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation”
We present UnLOK-VQA, a benchmark to evaluate unlearning in vision-and-language models, where both images and text may encode sensitive or private information.

07.05.2025 18:54 — 👍 10    🔁 8    💬 1    📌 0

🔥 BIG CONGRATS to Elias (and UT Austin)! Really proud of you -- it has been a complete pleasure to work with Elias and see him grow into a strong PI on *all* axes 🤗

Make sure to apply for your PhD with him -- he is an amazing advisor and person! 💙

05.05.2025 22:00 — 👍 11    🔁 4    💬 1    📌 0
UT Austin campus

Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! 🎉

05.05.2025 20:28 — 👍 43    🔁 9    💬 5    📌 2

✈️ Heading to #NAACL2025 to present 3 main conf. papers, covering training LLMs to balance accepting and rejecting persuasion, multi-agent refinement for more faithful generation, and adaptively addressing varying knowledge conflict.

Reach out if you want to chat!

29.04.2025 17:52 — 👍 15    🔁 5    💬 1    📌 0

Check out 🚨CAPTURe🚨 -- a new benchmark testing spatial reasoning by making VLMs count objects under occlusion.

SOTA VLMs (GPT-4o, Qwen2-VL, Intern-VL2) have high error rates on CAPTURe (but humans have low error ✅) and models struggle to reason about occluded objects.

arxiv.org/abs/2504.15485

🧵👇

24.04.2025 15:14 — 👍 5    🔁 4    💬 1    📌 0

In Singapore for #ICLR2025 this week to present papers + keynotes 👇, and looking forward to seeing everyone -- happy to chat about research, or faculty+postdoc+PhD positions, or simply hanging out (feel free to ping)! 🙂

Also meet our awesome students/postdocs/collaborators presenting their work.

21.04.2025 16:49 — 👍 19    🔁 4    💬 1    📌 1

🚨Real-world retrieval is messy: queries are ambiguous or docs conflict & have incorrect/irrelevant info. How can we jointly address these problems?

➡️ RAMDocs: challenging dataset w/ ambiguity, misinformation & noise
➡️ MADAM-RAG: multi-agent framework that debates & aggregates evidence across sources

🧵⬇️

18.04.2025 17:05 — 👍 14    🔁 7    💬 3    📌 0
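To make the debate-and-aggregate idea above concrete, here is a minimal Python sketch, assuming a generic `llm(prompt) -> str` chat function; the prompts and round structure are illustrative assumptions, not the paper's exact algorithm.

```python
from typing import Callable, List

def madam_rag_sketch(query: str, documents: List[str],
                     llm: Callable[[str], str], rounds: int = 2) -> str:
    # One agent per retrieved document: each answers from its own evidence only.
    answers = [llm(f"Answer '{query}' using only this document:\n{doc}") for doc in documents]
    for _ in range(rounds):
        # Debate round: agents see each other's answers and may revise their own,
        # e.g. flagging conflicts, misinformation, or irrelevant evidence.
        board = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
        answers = [
            llm(f"Question: {query}\nYour document:\n{doc}\nYour current answer: {ans}\n"
                f"Other agents said:\n{board}\nRevise your answer if warranted.")
            for doc, ans in zip(documents, answers)
        ]
    # An aggregator weighs the (possibly conflicting) answers into one final response.
    return llm(f"Question: {query}\nAgent answers:\n" + "\n".join(answers) +
               "\nAggregate these into a single answer, noting any remaining ambiguity.")
```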
Executable Functional Abstractions: Inferring Generative Programs for Advanced Math Problems
Scientists often infer abstract procedures from specific instances of problems and use the abstractions to generate new, related instances. For example, programs encoding the formal rules and properti...

It was a fun collaboration with @esteng.bsky.social @archiki.bsky.social @jmincho.bsky.social @mohitbansal.bsky.social! 🥳

Paper: arxiv.org/abs/2504.09763
Project Page: zaidkhan.me/EFAGen
Datasets + Models: huggingface.co/collections/...
HF Paper: huggingface.co/papers/2504....

15.04.2025 19:37 — 👍 2    🔁 0    💬 0    📌 0

EFAs can be used for adversarial search to find harder problem variants. This has some interesting potential uses, such as finding fresh problems for online RL or identifying gaps / inconsistencies in a model's reasoning ability. We can find variants of even Level 1 problems that GPT-4o solves incorrectly.

15.04.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0
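A hypothetical sketch of the adversarial-search use above: sample variants from an EFA and keep the ones a target model answers incorrectly. The `efa.sample_problem(rng)` interface and the `model_answer` callable are assumptions for illustration, not the released API.

```python
import random

def find_hard_variants(efa, model_answer, n_samples: int = 100, seed: int = 0):
    """Return EFA-generated problem variants that the target model gets wrong."""
    rng = random.Random(seed)
    hard = []
    for _ in range(n_samples):
        problem, gold = efa.sample_problem(rng)   # variant + program-verified answer
        if model_answer(problem) != gold:         # keep only variants the model fails on
            hard.append((problem, gold))
    return hard
```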

EFAGen can infer EFAs for diverse sources of math data.

We demonstrate this by inferring EFAs on the NuminaMath dataset, which includes problems ranging from grade school to olympiad level. EFAGen can successfully infer EFAs for all math sources in NuminaMath, even olympiad-level problems.

15.04.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0

EFAs are effective at augmenting training data.

Getting high-quality math data is expensive. EFAGen offers a way to improve upon existing math training data by generating problem variants through EFAs. EFA-based augmentation leads to consistent improvements across all evaluation metrics.

15.04.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0
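A rough illustration of the augmentation step (not the paper's exact pipeline), again assuming each EFA exposes a `sample_problem(rng) -> (problem, answer)` method:

```python
import random

def augment_with_efas(dataset, efas, variants_per_efa: int = 4, seed: int = 0):
    """Extend a static math dataset with EFA-generated problem variants."""
    rng = random.Random(seed)
    augmented = list(dataset)                      # keep the original problems
    for efa in efas:
        for _ in range(variants_per_efa):
            problem, answer = efa.sample_problem(rng)
            augmented.append({"problem": problem, "answer": answer, "source": "efa"})
    return augmented
```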

LMs can self-improve at inferring EFAs with execution feedback!

We self-train Llama-3.1-8B-Instruct with rejection finetuning using our derived unit tests as a verifiable reward signal and see substantial improvements in the model's ability to infer EFAs, especially on harder problems.

15.04.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0
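A rough sketch of the rejection-finetuning loop described above, with unit tests as the verifiable reward. `generate_candidates`, `run_unit_tests`, and `finetune` are placeholder callables standing in for the model's sampling, the test harness, and the SFT step; this is not the released training code.

```python
def rejection_finetune(model, problems, generate_candidates, run_unit_tests, finetune, rounds: int = 1):
    """Self-train a model to infer EFAs by finetuning only on candidates that pass all unit tests."""
    for _ in range(rounds):
        accepted = []
        for problem in problems:
            for candidate in generate_candidates(model, problem, n=8):
                if run_unit_tests(candidate, problem):        # verifiable reward: keep passing programs
                    accepted.append({"prompt": problem, "completion": candidate})
        model = finetune(model, accepted)                     # supervised finetuning on accepted pairs
    return model
```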

Key Insight 💡: We formalize the properties any valid EFA must possess as unit tests and treat EFA inference as a program synthesis task to which we can apply test-time search.

15.04.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0
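A minimal sketch of that framing, assuming the derived unit tests are plain Python callables and `propose_program` wraps an LLM call; both names are hypothetical.

```python
def infer_efa(seed_problem, propose_program, unit_tests, budget: int = 16):
    """Test-time search: sample candidate programs, return one that satisfies every validity test."""
    for _ in range(budget):
        candidate = propose_program(seed_problem)              # e.g. an LLM writing a Python program
        if all(test(candidate, seed_problem) for test in unit_tests):
            return candidate                                   # first candidate passing all properties
    return None                                                # search budget exhausted
```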

➡️ EFAGen can generate data to augment static math datasets
➡️ EFAGen can infer EFAs for diverse + difficult math problems
➡️ Use EFAs to find + generate harder variants of existing math problems
➡️ LLMs can self-improve at writing EFAs

15.04.2025 19:37 — 👍 0    🔁 0    💬 1    📌 0

What if we could transform advanced math problems into abstract programs that can generate endless, verifiable problem variants?

Presenting EFAGen, which automatically transforms static advanced math problems into their corresponding executable functional abstractions (EFAs).
🧵👇

15.04.2025 19:37 — 👍 15    🔁 5    💬 1    📌 1
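For readers new to the term, here is a hypothetical toy example of what an executable functional abstraction for one simple problem family could look like: a small program that samples parametrized variants together with program-verified answers. The class layout is illustrative only, not the paper's interface.

```python
import random

class SumOfMultiplesEFA:
    """EFA for problems like: 'What is the sum of the first n positive multiples of k?'"""

    def sample_problem(self, rng: random.Random):
        n = rng.randint(5, 50)
        k = rng.randint(2, 12)
        problem = f"What is the sum of the first {n} positive multiples of {k}?"
        answer = k * n * (n + 1) // 2   # closed form k*(1+2+...+n) doubles as the verifier
        return problem, answer

# Each call yields a fresh, automatically verifiable variant of the same problem family.
efa = SumOfMultiplesEFA()
print(efa.sample_problem(random.Random(0)))
print(efa.sample_problem(random.Random(1)))
```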

🥳🥳 Honored and grateful to be awarded the 2025 Apple Scholars in AI/ML PhD Fellowship! ✨

Huge shoutout to my advisor @mohitbansal.bsky.social, & many thanks to my lab mates @unccs.bsky.social, past collaborators + internship advisors for their support ☺️🙏

machinelearning.apple.com/updates/appl...

27.03.2025 19:25 — 👍 14    🔁 3    💬 1    📌 3

🚨 Introducing UPCORE, to balance deleting info from LLMs with keeping their other capabilities intact.

UPCORE selects a coreset of forget data, leading to a better trade-off across 2 datasets and 3 unlearning methods.

🧵👇

25.02.2025 02:23 — 👍 12    🔁 5    💬 2    📌 1
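As a generic sketch of the coreset idea (the distance-to-centroid criterion below is an illustrative stand-in, not necessarily UPCORE's actual selection rule): prune outlier forget-set examples so that unlearning on the remaining core damages other capabilities less.

```python
import numpy as np

def select_forget_coreset(embeddings: np.ndarray, keep_fraction: float = 0.8) -> np.ndarray:
    """Return indices of the most 'central' forget examples to use as the unlearning coreset."""
    centroid = embeddings.mean(axis=0)
    dists = np.linalg.norm(embeddings - centroid, axis=1)
    k = int(len(embeddings) * keep_fraction)
    return np.argsort(dists)[:k]          # drop the outliers, keep the core
```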

🚨 Check out "UTGen & UTDebug" for learning to automatically generate unit tests (i.e., discovering inputs which break your code) and then applying them to debug code with LLMs, with strong gains (>12% pass@1) across multiple models/datasets! (see details in 🧵👇)

1/4

05.02.2025 18:53 — 👍 7    🔁 4    💬 1    📌 0

🚨 Excited to announce UTGen and UTDebug: we first learn to generate unit tests and then apply them to debugging generated code with LLMs, with strong gains (+12% pass@1) on LLM-based debugging across multiple models/datasets via inference-time scaling and cross-validation + backtracking!

🧵👇

04.02.2025 19:13 — 👍 8    🔁 5    💬 0    📌 0

🚨 Excited to share: "Learning to Generate Unit Tests for Automated Debugging" 🚨
which introduces ✨UTGen and UTDebug✨ for teaching LLMs to generate unit tests (UTs) and to debug code using the generated tests.

UTGen+UTDebug yields large gains in debugging (+12% pass@1) & addresses 3 key questions:

🧵👇

04.02.2025 19:09 — 👍 18    🔁 7    💬 1    📌 2
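A simplified sketch of the generate-tests-then-debug loop: model-written unit tests act as a noisy reward, and an edit is only accepted if it passes more of them than the current program (a crude stand-in for the cross-validation + backtracking mentioned above). All callables are placeholders for the LLM and the execution harness.

```python
def debug_with_generated_tests(code, task, gen_tests, run_tests, propose_fix, max_iters: int = 4):
    """Iteratively repair `code` for `task`, accepting only edits that pass more generated tests."""
    tests = gen_tests(task, code)               # UTGen-style: model writes unit tests for the task
    best_score = run_tests(code, tests)         # fraction of generated tests the current code passes
    for _ in range(max_iters):
        candidate = propose_fix(code, task, tests)
        score = run_tests(candidate, tests)
        if score > best_score:                  # otherwise back off to the previous best program
            code, best_score = candidate, score
    return code
```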

-- positional bias of faithfulness for long-form summarization
-- improving generation faithfulness via multi-agent collaboration

(PS. Also a big thanks to ACs+reviewers for their effort!)

27.01.2025 21:38 — 👍 2    🔁 1    💬 0    📌 0

-- safe T2I/T2V generation
-- generative infinite games
-- procedural+predictive video representation learning
-- bootstrapping VLN via a self-refining data flywheel
-- automated preference data synthesis
-- diagnosing cultural bias of VLMs
-- adaptive decoding to balance contextual+parametric knowledge conflicts
🧡

27.01.2025 21:38 — 👍 3    🔁 1    💬 1    📌 0

-- adapting diverse controls to any diffusion model
-- balancing fast+slow System-1.x planning
-- balancing agents' persuasion resistance+acceptance
-- multimodal compositional+modular video reasoning
-- reverse thinking for stronger LLM reasoning
-- lifelong multimodal instruction tuning via dynamic data selection
🧡

27.01.2025 21:38 — 👍 1    🔁 1    💬 1    📌 0

🎉 Congrats to the awesome students, postdocs, & collaborators for this exciting batch of #ICLR2025 and #NAACL2025 accepted papers (FYI some are on the academic/industry job market and are a great catch 🙂), on diverse, important topics such as:

-- adaptive data generation environments/policies
...
🧡

27.01.2025 21:38 — 👍 18    🔁 9    💬 1    📌 0

🎉 Very excited that our work on Persuasion-Balanced Training has been accepted to #NAACL2025! We introduce a multi-agent tree-based method for teaching models to balance:

1️⃣ Accepting persuasion when it helps
2️⃣ Resisting persuasion when it hurts (e.g. misinformation)

arxiv.org/abs/2410.14596
🧡 1/4

23.01.2025 16:50 — 👍 21    🔁 8    💬 1    📌 1

Thanks @AAAI for selecting me as a #AAAI Fellow! Very humbled+excited to be a part of the respected cohort of this+past years' fellows (& congrats everyone)! 🙏

100% credit goes to my amazing past/current students+postdocs+collaborators for their work (& thanks to mentors+family)! 💙
aaai.org/about-aaai/a...

21.01.2025 19:08 — 👍 17    🔁 5    💬 0    📌 3

🎉 Congratulations to Prof. @mohitbansal.bsky.social on being named a 2025 @RealAAAI Fellow for "significant contributions to multimodal AI foundations & faithful language generation and summarization." 👏

16 Fellows chosen worldwide by a committee of 9 past fellows & an ex-president: aaai.org/about-aaai/a...

21.01.2025 15:56 — 👍 10    🔁 4    💬 0    📌 1

Deeply honored & humbled to have received the Presidential #PECASE Award by the @WhiteHouse and @POTUS office! 🙏

Most importantly, very grateful to my amazing mentors, students, postdocs, collaborators, and friends+family for making this possible, and for making the journey worthwhile + beautiful 💙

15.01.2025 16:45 — 👍 43    🔁 8    💬 5    📌 1
