6/6 For more details, see:
Paper: arxiv.org/pdf/2502.09969
Code: github.com/agarwalishik...
Thank you so much to @dilekh.bsky.social and @convai-uiuc.bsky.social for their guidance and support during this project!
@wonderingishika.bsky.social
CS PhD @ UIUC | Data Efficiency NLP | Conversational AI | agarwalishika.github.io | same handle on twitter
5/6 Finally, using our influence values, we pick a small subset & fine-tune the model. In our evaluation, we use 4 SOTA influence functions -- NN-CIFT achieves the same performance while using a model 34,000x smaller!
17.02.2025 04:06
4/6 Second, we train the InfluenceNetwork using basic mini-batch gradient descent, then let it estimate the influence for the remaining data. It has a very low error of 0.067!
17.02.2025 04:06
3/6 First, the neural network (called the "InfluenceNetwork") needs to be trained. We compute influence values using existing methods -- but only for a tiny fraction of data (just 0.25%-5%).
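As a rough illustration of steps 3/6 and 4/6, here is a minimal NumPy sketch. All names, sizes, and the synthetic influence labels are hypothetical stand-ins, not from the paper: a tiny MLP is fit with mini-batch gradient descent on influence values computed for a small labeled fraction, then estimates influence for the rest in one cheap forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): embeddings for 1,000 samples and
# influence values we pretend an existing, expensive method computed.
X = rng.normal(size=(1000, 8)).astype(np.float32)
w_true = rng.normal(size=8).astype(np.float32)
true_influence = np.tanh(X @ w_true)

# Step 3/6: label only a tiny fraction (here 5%) with the expensive method.
n_labeled = 50
idx = rng.permutation(len(X))
train_idx, rest_idx = idx[:n_labeled], idx[n_labeled:]

# Step 4/6: train a tiny two-layer MLP with mini-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)
lr, batch = 0.05, 10
for epoch in range(300):
    for s in range(0, n_labeled, batch):
        b_idx = train_idx[s:s + batch]
        x, y = X[b_idx], true_influence[b_idx, None]
        h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
        pred = h @ W2 + b2
        err = pred - y                        # gradient of 0.5*MSE wrt pred
        gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (h > 0)           # backprop through ReLU
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

# Estimate influence for the remaining 95% with one cheap forward pass.
h = np.maximum(X[rest_idx] @ W1 + b1, 0.0)
est = (h @ W2 + b2).ravel()
mae = float(np.abs(est - true_influence[rest_idx]).mean())
print(f"MAE on the unlabeled 95%: {mae:.3f}")
```

The point of the sketch is the cost asymmetry: the expensive influence method runs on 50 samples, while the 950 remaining estimates cost only a forward pass through a few hundred parameters.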
17.02.2025 04:06
2/6 Estimating the value of data is expensive.
Past works use LLMs to estimate the influence of data -- we use small neural networks to *learn to estimate* influence, instead. This reduces costs and adapts to new data without heavy recomputation.
Here's how it works:
Very excited about my new paper!
NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance!
Elated to announce that DELIFT has been accepted to ICLR'25! Looking forward to discussing it in Singapore!
23.01.2025 15:46
Congratulations to @dilekh.bsky.social for her ACL Fellowship! www.aclweb.org/portal/conte...
11.12.2024 14:35
The last response from Gemini in this thread may shock you: gemini.google.com/share/6d141b...
24.11.2024 07:28
Thank you Guneet! Would love to hear more about these stress tests :)
24.11.2024 00:23
Hey! Would love to be added :)
20.11.2024 23:44
Can LLMs make us critical thinkers?
TreeInstruct reorients assistant-like LLMs to be instructors that guide students towards understanding their mistakes, without providing direct/indirect answers.
Check out aclanthology.org/2024.finding... (w/ @wonderingishika.bsky.social) to learn more!
All around the theme of data-efficient NLP:
(1) using influence functions to improve language model performance from less data
(2) enabling language models to generate queries for things they don't know
For more details, see:
Paper: arxiv.org/pdf/2411.04425
Code: github.com/agarwalishik...
Thank you so much to Krishnateja, Lucian, and Marina for their help, mentorship, and guidance during this project!
3. Continual fine-tuning: given a fine-tuned model, enabling it to integrate new and complementary information while mitigating catastrophic forgetting. We find that reducing the dataset helps remove samples that hinder performance, surpassing the performance of the full dataset.
17.11.2024 19:25
2. Task-specific fine-tuning: given an instruction-tuned model, refining the LLM's expertise in specific domains. We find that pruning the dataset removes noise and keeps relevant examples, achieving better performance than fine-tuning on the full dataset.
17.11.2024 19:25
1. Instruction tuning: given a base model, fine-tuning it to follow general instructions. We find that performance drops are minimal when reducing the dataset by 70%.
17.11.2024 19:25
DELIFT quantifies the information present in a sample wrt an LLM's capabilities. Using submodular functions, DELIFT can automatically adapt the chosen subset based on the objectives in the 3 stages of language model fine-tuning:
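For illustration, here is a generic sketch of greedy submodular subset selection with a facility-location objective, one common instance of the submodular functions mentioned above. The cosine-similarity matrix and sizes are toy assumptions, not DELIFT's actual utility functions.

```python
import numpy as np

def greedy_facility_location(sim, k):
    """Greedily maximize the facility-location function
    f(S) = sum_i max_{j in S} sim[i, j].
    sim: (n, n) pairwise similarity matrix; returns k selected indices.
    """
    n = sim.shape[0]
    selected = []
    best_cover = np.zeros(n)          # current max similarity per sample
    for _ in range(k):
        # Marginal gain of adding each candidate column j.
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf     # never re-pick a chosen sample
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

# Toy demo: unit-norm embeddings, cosine similarity, pick 5 of 100 samples.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
sim = emb @ emb.T
subset = greedy_facility_location(sim, k=5)
print(subset)
```

Because facility location is monotone submodular, this greedy loop carries the classic (1 - 1/e) approximation guarantee, which is why greedy selection is the standard workhorse for this family of objectives.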
17.11.2024 19:25
I'm so excited to share my latest paper, DELIFT, along with Krishnateja Killamsetty, Lucian Popa, and Marina Danilevsky at IBM Research!
We tackle expensive fine-tuning by selecting a small subset of informative data that targets a model's weaknesses.
TreeInstruct is preferred 78.43% of the time. It solves 14.09% more bugs across all settings, and our questions are 14.18% better at addressing bugs, maintaining relevance, and ensuring logical conversation flow. TreeInstruct also adapts to human students of varying backgrounds.
17.11.2024 19:22
TreeInstruct estimates the knowledge a student needs to debug their code and devises a conversation plan. It then dynamically constructs a question tree based on its interactions with the student, navigating the knowledge state space till the student comprehends & fixes all bugs.
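A minimal sketch of the question-tree idea above (purely illustrative data structures and question text, not the paper's implementation): if the student shows understanding, the current concept is resolved; otherwise the tutor descends to a finer-grained follow-up question.

```python
from dataclasses import dataclass, field

@dataclass
class QuestionNode:
    """One Socratic question; children refine it if the student struggles."""
    question: str
    target_concept: str                      # knowledge the question probes
    children: list["QuestionNode"] = field(default_factory=list)

def next_question(node, student_understands):
    """Navigate the tree: if the student answers well, this concept is
    resolved (return None to move on); otherwise descend to a more
    fine-grained follow-up, or stay on this question if there is none."""
    if student_understands:
        return None
    return node.children[0] if node.children else node

# Toy walk-through for a single off-by-one bug (hypothetical content).
root = QuestionNode(
    "What values does your loop variable take?", "loop bounds",
    children=[QuestionNode(
        "What happens on the last iteration, when i == len(arr)?",
        "off-by-one index")],
)
step1 = next_question(root, student_understands=False)   # descend deeper
step2 = next_question(step1, student_understands=True)   # concept resolved
print(step1.question)
```

The real system would build and reorder such nodes dynamically from the dialogue; the sketch only shows the navigate-until-understood control flow.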
17.11.2024 19:22
github.com/agarwalishik...
We apply TreeInstruct to code debugging. Prior works directly give away bugs/fixes, assume single-turn conversations, and only work for one bug. We create a realistic, multi-bug dataset, where the bugs are mutually dependent.
Can LLMs make us critical thinkers?
TreeInstruct reorients LLMs to be instructors that guide students socratically to solve problems, instead of assistants that provide direct answers.
Check out our EMNLP2024 paper at arxiv.org/abs/2406.11709 (w/ @pkargupta.bsky.social) to learn more!
I'd love to be added - thank you!!
17.11.2024 16:45