Ishika Agarwal's Avatar

Ishika Agarwal

@wonderingishika.bsky.social

CS PhD @ UIUC | Data-Efficient NLP | Conversational AI | agarwalishika.github.io | same handle on twitter

896 Followers  |  396 Following  |  22 Posts  |  Joined: 10.11.2024

Latest posts by wonderingishika.bsky.social on Bluesky

6/6 For more details, see:

Paper: arxiv.org/pdf/2502.09969
Code: github.com/agarwalishik...

Thank you so much to @dilekh.bsky.social and @convai-uiuc.bsky.social for their guidance and support during this project 🎉🎉

17.02.2025 04:06 · 👍 4    🔁 0    💬 0    📌 0
Post image

5/6 Finally, using our influence values, we pick a small subset & fine-tune the model. In our evaluation, we use 4 SOTA influence functions -- NN-CIFT achieves the same performance while using a model 34,000x smaller!
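
(As a rough illustration of this selection step: score each training point with the trained network and keep the top-k. Averaging influence over the validation embeddings is one simple rule assumed here, not necessarily the paper's exact criterion:)

```python
import torch

def select_topk(model, train_embs, val_embs, k):
    """Rank training points by their mean estimated influence on the
    validation embeddings and return the indices of the top-k."""
    model.eval()
    with torch.no_grad():
        scores = torch.stack([
            model(t.expand(len(val_embs), -1), val_embs).mean()
            for t in train_embs
        ])
    return torch.topk(scores, k).indices.tolist()
```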

17.02.2025 04:06 · 👍 2    🔁 0    💬 1    📌 0
Post image

4/6 Second, we train the InfluenceNetwork using basic mini-batch gradient descent, then let it estimate the influence for the remaining data. It has a very low error of 0.067!
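
(A hedged sketch of that step; the two-layer MLP over concatenated embeddings and the MSE objective are illustrative assumptions, not necessarily the exact architecture:)

```python
import torch
import torch.nn as nn

class InfluenceNetwork(nn.Module):
    """Tiny MLP: maps a (train, val) embedding pair to a scalar influence estimate."""
    def __init__(self, emb_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, train_emb, val_emb):
        return self.net(torch.cat([train_emb, val_emb], dim=-1)).squeeze(-1)

def train_influence_network(model, loader, epochs=10, lr=1e-3):
    """Plain mini-batch gradient descent on the labelled warm-up pairs."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for train_emb, val_emb, target in loader:  # loader yields labelled pairs
            opt.zero_grad()
            loss_fn(model(train_emb, val_emb), target).backward()
            opt.step()
    return model
```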

17.02.2025 04:06 · 👍 2    🔁 0    💬 1    📌 0
Post image

3/6 First, the neural network (called the β€œInfluenceNetwork”) needs to be trained. We compute influence values using existing methods -- but only for a tiny fraction of data (just 0.25%-5%).
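
(A minimal sketch of this warm-up step, assuming a generic pairwise influence oracle and an illustrative 0.5% sampling rate; `expensive_influence_fn` is a placeholder, not the paper's API:)

```python
import random

def label_warmup_pairs(train_data, val_data, expensive_influence_fn, frac=0.005, seed=0):
    """Label a tiny fraction of (train, val) pairs with an existing, expensive
    influence method; these labels are used to train the InfluenceNetwork."""
    rng = random.Random(seed)
    train_sub = rng.sample(train_data, max(1, int(len(train_data) * frac)))
    val_sub = rng.sample(val_data, max(1, int(len(val_data) * frac)))
    # expensive_influence_fn(x, y) stands in for whatever existing influence
    # function is being distilled (gradient-based, LLM-based, ...).
    return [(x, y, expensive_influence_fn(x, y)) for x in train_sub for y in val_sub]
```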

17.02.2025 04:06 · 👍 0    🔁 0    💬 1    📌 0

2/6 Estimating the value of data is expensive.

Past works use LLMs to estimate the influence of data -- we use small neural networks to *learn to estimate* influence, instead. This reduces costs and adapts to new data without heavy recomputation.

Here’s how it works:

17.02.2025 04:06 · 👍 0    🔁 0    💬 1    📌 0
Post image

🚀 Very excited about my new paper!

NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance!

17.02.2025 04:06 · 👍 11    🔁 4    💬 1    📌 1

Elated to announce that DELIFT has been accepted to ICLR'25 🎉 Looking forward to discussing it in Singapore!

23.01.2025 15:46 · 👍 3    🔁 0    💬 0    📌 0
ACL Fellows 2024 | ACL Member Portal

Congratulations to @dilekh.bsky.social for her ACL Fellowship! 🎉🎉🎉 www.aclweb.org/portal/conte...

11.12.2024 14:35 · 👍 11    🔁 2    💬 0    📌 1
Preview
Gemini - Challenges and Solutions for Aging Adults · Created with Gemini

The last response from Gemini in this thread may shock you: gemini.google.com/share/6d141b...

24.11.2024 07:28 · 👍 6    🔁 1    💬 0    📌 0

Thank you Guneet! Would love to hear more about these stress tests :)

24.11.2024 06:26 · 👍 2    🔁 0    💬 0    📌 0

👋

24.11.2024 00:23 · 👍 2    🔁 0    💬 0    📌 0

Hey! Would love to be added :)

20.11.2024 23:44 · 👍 0    🔁 0    💬 1    📌 0
Post image

Can LLMs make us critical thinkers?

TreeInstruct reorients assistant-like LLMs to be instructors that guide students towards understanding their mistakes, without providing direct/indirect answers.

Check out aclanthology.org/2024.finding... (w/ @wonderingishika.bsky.social) to learn more!

19.11.2024 17:23 · 👍 2    🔁 1    💬 1    📌 0

All around the theme of data-efficient NLP:

(1) using influence functions to improve language model performance from less data
(2) enabling language models to generate queries for things they don't know

17.11.2024 23:46 · 👍 3    🔁 0    💬 0    📌 0

For more details, see:
Paper: arxiv.org/pdf/2411.04425
Code: github.com/agarwalishik...

Thank you so much to Krishnateja, Lucian, and Marina for their help, mentorship, and guidance during this project! 🎉🎉

17.11.2024 19:25 · 👍 0    🔁 0    💬 0    📌 0
Post image

3. Continual fine-tuning: given a fine-tuned model, enabling it to integrate new and complementary information while mitigating catastrophic forgetting. We find that reducing the dataset helps remove samples that hinder performance, surpassing the performance of the full dataset.

17.11.2024 19:25 · 👍 1    🔁 0    💬 1    📌 0
Post image

2. Task-specific fine-tuning: given an instruction-tuned model, refining the LLM's expertise in specific domains. We find that pruning the dataset removes noise and keeps relevant examples, achieving better performance than fine-tuning on the full dataset.

17.11.2024 19:25 · 👍 1    🔁 0    💬 1    📌 0
Post image

1. Instruction tuning: given a base model, fine-tuning a model to follow general instructions. We find that performance drops are minimal when reducing the dataset by 70%.

17.11.2024 19:25 · 👍 0    🔁 0    💬 1    📌 0

DELIFT quantifies the information present in a sample wrt an LLM's capabilities. Using submodular functions, DELIFT can automatically adapt the chosen subset based on the objectives in the 3 stages of language model fine-tuning:
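
(For intuition, subset selection with a submodular objective is typically done greedily; below is a generic facility-location-style sketch, not DELIFT's exact utility function:)

```python
import numpy as np

def greedy_facility_location(sim, budget):
    """Greedy maximization of f(S) = sum_i max_{j in S} sim[i, j],
    where sim is an (n x n) pairwise similarity/utility matrix."""
    n = sim.shape[0]
    selected, cover = [], np.zeros(n)
    for _ in range(budget):
        # Marginal gain of adding each candidate column j to the current set.
        gains = np.maximum(sim, cover[:, None]).sum(axis=0) - cover.sum()
        gains[selected] = -np.inf  # never re-pick an element
        j = int(np.argmax(gains))
        selected.append(j)
        cover = np.maximum(cover, sim[:, j])
    return selected
```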

17.11.2024 19:25 · 👍 0    🔁 0    💬 1    📌 0
Post image

I'm so excited to share my latest paper called DELIFT along with Krishnateja Killamsetty, Lucian Popa, and Marina Danilevsky at IBM Research 🎉

We tackle expensive fine-tuning by selecting a small subset of informative data that targets a model's weaknesses.

17.11.2024 19:25 · 👍 8    🔁 1    💬 2    📌 1

TreeInstruct is preferred 78.43% of the time. It solves 14.09% more bugs across all settings, and our questions are 14.18% better at addressing bugs, maintaining relevance, and ensuring logical conversation flow. TreeInstruct also adapts to human students of varying backgrounds.

17.11.2024 19:22 · 👍 0    🔁 0    💬 0    📌 0

TreeInstruct estimates the knowledge a student needs to debug their code and devises a conversation plan. It then dynamically constructs a question tree based on its interactions with the student, navigating the knowledge state space till the student comprehends & fixes all bugs.
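
(A very rough sketch of that interaction loop; the state-estimation and question-generation calls are placeholders standing in for TreeInstruct's actual LLM prompting:)

```python
def socratic_session(estimate_state, generate_question, get_student_reply,
                     all_bugs_fixed, max_turns=20):
    """Hypothetical outer loop: re-estimate the student's knowledge state each
    turn and pick the next question-tree node until every bug is fixed."""
    history = []
    state = estimate_state(history)
    for _ in range(max_turns):
        if all_bugs_fixed(state):
            break
        question = generate_question(state, history)  # expand the question tree
        history.append((question, get_student_reply(question)))
        state = estimate_state(history)
    return history
```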

17.11.2024 19:22 · 👍 0    🔁 0    💬 1    📌 0
Preview
GitHub - agarwalishika/TreeInstruct: TreeInstruct is a novel method that uses state space estimation and dynamic tree-based questioning for multi-turn Socratic instruction, applied to code debugging.

github.com/agarwalishik...
We apply TreeInstruct to code debugging. Prior works directly give away bugs/fixes, assume single-turn conversations, and only work for one bug. We create a realistic, multi-bug dataset, where the bugs are mutually dependent.

17.11.2024 19:22 · 👍 0    🔁 0    💬 1    📌 0
Preview
Instruct, Not Assist: LLM-based Multi-Turn Planning and Hierarchical Questioning for Socratic Code Debugging
Socratic questioning is an effective teaching strategy, encouraging critical thinking and problem-solving. The conversational capabilities of large language models (LLMs) show great potential for prov...

Can LLMs make us critical thinkers?

TreeInstruct reorients LLMs to be instructors that guide students socratically to solve problems, instead of assistants that provide direct answers.

Check out our EMNLP2024 paper at arxiv.org/abs/2406.11709 (w/ @pkargupta.bsky.social) to learn more!

17.11.2024 19:22 · 👍 2    🔁 0    💬 1    📌 0

I'd love to be added - thank you!!

17.11.2024 16:45 · 👍 1    🔁 0    💬 0    📌 0
