
Apoorv Khandelwal

@apoorvkh.com

cs phd student at brown https://apoorvkh.com

819 Followers  |  198 Following  |  16 Posts  |  Joined: 13.08.2024

Latest posts by apoorvkh.com on Bluesky

What is an “Abstract Reasoner”? Revisiting Experiments and Arguments about Large Language Models. Tian Yun, Chen Sun, Ellie Pavlick. Proceedings of the 29th Conference on Computational Natural Language Learning. 2025.

aclanthology.org/2025.conll-1...

28.07.2025 05:13 — 👍 0    🔁 0    💬 0    📌 0

Will be at ACL this week! #ACL2025 #ACL2025NLP

Presenting Tian Yun’s paper on abstract reasoners at CoNLL on Thursday.

I’ve been investigating how LLMs internally compose functions lately. Happy to chat about that (among other things) and hang out in Vienna!

28.07.2025 05:09 — 👍 0    🔁 0    💬 1    📌 0

Curious how many papers were assigned per reviewer on average! Review quality seems better than usual, at least in my small sample. Wondering if that correlates with a lower reviewer load? E.g., I only received 2 papers to review.

29.05.2025 18:49 — 👍 0    🔁 0    💬 0    📌 0
Scores of R1, Flash-Thinking, Claude 3.7, QwQ, o1-pro, and o3-mini on USAMO 2025: all below 5% of the maximum score.

Evaluating models on USAMO 2025 problems immediately after they were posted yields surprisingly poor performance. This suggests there's much more training on test data than expected.
arxiv.org/abs/2503.219...

31.03.2025 19:08 — 👍 29    🔁 8    💬 7    📌 0

Just read that AI’s energy consumption in data centers is nothing to be worried about because most of the hyperscale datacenters running AI are "powered by renewable energy or low-carbon nuclear power."

Let's debunk that, shall we?

19.03.2025 19:24 — 👍 30    🔁 12    💬 2    📌 1
New England NLP Meeting Series

If you're in the northeastern US and you're submitting a paper to COLM on March 27, you should absolutely be sending its abstract to New England NLP on March 28.

19.03.2025 19:59 — 👍 8    🔁 3    💬 0    📌 0

+ No system pre-reqs, multi-stage PyTorch workflows in one script, CLI integrations, catching system failures as exceptions, SLURM support, better logging, and so much more!

Additional fine-tuning examples in our docs with:
@pytorch.org, DeepSpeed, @lightningai.bsky.social, HF Accelerate

11.03.2025 16:54 — 👍 0    🔁 0    💬 0    📌 0

A cool side effect: fine-tune any LLM (from @huggingface transformers) on any text dataset *with multiple nodes* in just *one command*.

torchrun.xyz/examples/tra...

11.03.2025 16:54 — 👍 2    🔁 0    💬 1    📌 0

It's a replacement for CLI tools like "torchrun".

Most basic usage: specify the (SSH-enabled) machines you want to parallelize your code across, then launch a function onto that configuration.

All from inside your Python script!
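
Roughly, that looks like the sketch below. This is a minimal illustration, not copied from the torchrunx docs: the exact names (torchrunx.launch, hostnames, workers_per_host) are assumptions based on torchrun.xyz and may differ between versions.

```python
# Minimal sketch, not copied from the docs: torchrunx.launch / hostnames /
# workers_per_host are assumed names (check torchrun.xyz for the current API).
import os

import torchrunx


def train() -> None:
    # Runs once per worker (e.g., per GPU) on every host. The standard
    # torch.distributed environment variables (RANK, WORLD_SIZE, ...) are set
    # for each worker; depending on the version, torchrunx may also initialize
    # the process group for you.
    print(f"hello from rank {os.environ['RANK']} of {os.environ['WORLD_SIZE']}")


if __name__ == "__main__":
    # Two SSH-enabled machines, two workers each -- launched from this script.
    torchrunx.launch(
        func=train,
        hostnames=["machine-1.example.com", "machine-2.example.com"],
        workers_per_host=2,
    )
```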

11.03.2025 16:54 — 👍 1    🔁 0    💬 1    📌 0
GitHub - apoorvkh/torchrunx: Easily run PyTorch on multiple GPUs & machines

We made a library (torchrunx) to make multi-GPU / multi-node PyTorch easier, more robust, and more modular! 🧡

github.com/apoorvkh/tor...
Docs: torchrun.xyz

`(uv) pip install torchrunx` today!

(w/ the very talented Peter Curtin, Brown CS '25)

11.03.2025 16:54 — 👍 4    🔁 2    💬 1    📌 0
Paper: A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

✨How does the depth of a transformer affect its reasoning capabilities? New preprint by myself and @Ashish_S_AI shows that a little depth goes a long way to increase transformers’ expressive power

We take this as encouraging for further research on looped transformers!🧡

07.03.2025 16:46 — 👍 12    🔁 2    💬 1    📌 0

(1/9) Excited to share my recent work on "Alignment reduces LM's conceptual diversity" with @tomerullman.bsky.social and @jennhu.bsky.social, to appear at #NAACL2025! 🐟

We want models that match our values...but could this hurt their diversity of thought?
Preprint: arxiv.org/abs/2411.04427

10.02.2025 17:20 — 👍 63    🔁 10    💬 3    📌 4
Managing Project Dependencies

I started a blog! First post is everything I know about setting up (fast, reproducible, error-proof) Python project environments using the latest tools. These methods have saved me a lot of grief. Also a short guide to CUDA in the appendix :)

blog.apoorvkh.com/posts/projec...

07.02.2025 15:45 — 👍 3    🔁 0    💬 0    📌 1

I think typing my code and using a linter (ruff) + static type checker (pyright) saves me a lot of grief.
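
For example (a hypothetical snippet, not from the post), this is the kind of mistake that type hints plus pyright catch statically, while ruff separately flags lint issues like unused imports:

```python
# Hypothetical example: pyright rejects the bad call below before runtime;
# ruff would separately flag lint issues such as unused imports.
def mean(values: list[float]) -> float:
    return sum(values) / len(values)


scores: list[float] = [0.5, 0.75, 1.0]
print(mean(scores))  # 0.75

# pyright error: a str is not assignable to list[float]
# mean("0.5, 0.75, 1.0")
```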

25.01.2025 18:49 — 👍 1    🔁 0    💬 0    📌 0

Can GANs compete in 2025? In 'The GAN is dead; long live the GAN! A Modern GAN Baseline', we show that a minimalist GAN w/o any tricks can match the performance of EDM with half the size and one-step generation - github.com/brownvc/r3gan - work of Nick Huang, @skylion.bsky.social, Volodymyr Kuleshov

10.01.2025 19:08 — 👍 69    🔁 14    💬 3    📌 1
Simons Institute: The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science. Established on July 1, 2012, the Institute is housed in Calvin...

A couple sources for academic talks that I really like!

Cohere For AI (www.youtube.com/playlist?lis...)

Simons Institute (www.youtube.com/@SimonsInsti...)

10.01.2025 20:05 — 👍 2    🔁 0    💬 0    📌 0

Let he who hath not \usepackage[subtle]{savetrees}

18.12.2024 01:27 — 👍 13    🔁 1    💬 1    📌 0

Slides from the tutorial are now posted here!

neurips.cc/media/neurip...

11.12.2024 16:43 — 👍 17    🔁 7    💬 0    📌 0

“They said it could not be done”. We’re releasing Pleias 1.0, the first suite of models trained on open data (either permissively licensed or uncopyrighted): Pleias-3b, Pleias-1b and Pleias-350m, all based on the two-trillion-token set from Common Corpus.

05.12.2024 16:39 — 👍 251    🔁 85    💬 12    📌 19

I am an ex-Paperpile user and am liking Zotero lately! Free storage from the university helps.

27.11.2024 05:15 — 👍 0    🔁 0    💬 0    📌 0
GitHub - benlipkin/decoding: Composable inference algorithms with LLMs and programmable logic

Lots of folks talking about scaling LLM inference over this last year

Internally, I’ve been developing and using a library that makes this extremely easy, and I decided to open-source it
Meet the decoding library: github.com/benlipkin/de...

1/7

25.11.2024 16:19 — 👍 26    🔁 5    💬 1    📌 0
GitHub - McGill-NLP/llm2vec: Code for 'LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders'

“Turn” a decoder into an encoder with LLM2Vec (github.com/McGill-NLP/l...). Seen at COLM 2024 :)

If you want the naive, training-free / model-agnostic approach: their related work section says it is most common to use the final token’s last hidden state.
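
A rough sketch of that naive approach, using gpt2 via Hugging Face transformers as a stand-in model (this is not LLM2Vec code):

```python
# Sketch of the naive, training-free approach: embed text with a decoder-only
# LM by taking the last hidden state of its final token. "gpt2" is just a
# stand-in model; this is not LLM2Vec code.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("Turn a decoder into an encoder.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state: (batch, seq_len, hidden_dim); take the final token's
# state (no padding here, so the last position is the last real token).
embedding = outputs.last_hidden_state[0, -1]
print(embedding.shape)  # torch.Size([768]) for gpt2
```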

26.11.2024 01:37 — 👍 14    🔁 1    💬 0    📌 0

Okay, genius idea to improve the quality of #nlp #arr reviews: literally give gold stars to the best reviewers, visible on OpenReview next to your anonymous ID during the review process.

Here’s why it would work, and why you should RT this fab idea:

24.11.2024 21:01 — 👍 27    🔁 5    💬 3    📌 1

Thanks and great! Hope you are likewise doing well!

21.11.2024 21:29 — 👍 1    🔁 0    💬 1    📌 0

Would be great to join, thanks!

21.11.2024 21:15 — 👍 1    🔁 0    💬 1    📌 0

Excited to release Tulu 3! We worked hard to try and make the best open post-training recipe we could, and the results are good!
I was lucky enough to work on almost every stage of the pipeline in one way or another. Some comments + highlights ⬇️

21.11.2024 17:45 — 👍 9    🔁 5    💬 1    📌 0

You can find the “authors’ cut” at: arxiv.org/abs/2410.23261

21.11.2024 16:23 — 👍 1    🔁 0    💬 1    📌 0
AI’s computing gap: academics lack access to powerful chips needed for research Survey highlights disparity between academic and industry scientists’ access to computing power needed to train machine-learning models.

Nature wrote a nice article about our work!

www.nature.com/articles/d41...

21.11.2024 16:23 — 👍 11    🔁 1    💬 1    📌 0
