Kanishk Gandhi

@gandhikanishk.bsky.social

PhD Student Stanford w/ Noah Goodman, studying reasoning, discovery, and interaction. Trying to build machines that understand people. StanfordNLP, Stanford AI Lab

831 Followers  |  334 Following  |  18 Posts  |  Joined: 17.12.2023

Latest posts by gandhikanishk.bsky.social on Bluesky

How can we combine the process-level insight that think-aloud studies give us with the large scale that modern online experiments permit? In our new CogSci paper, we show that speech-to-text models and LLMs enable us to scale up the think-aloud method to large experiments!

25.06.2025 05:32 · 👍 22  🔁 5  💬 0  📌 0
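For a rough sense of how such a pipeline could be wired up, here is a minimal sketch, assuming the openai-whisper package for transcription and a hypothetical code_transcript step standing in for the LLM-based protocol coding (this is not the paper's actual code):

```python
# Minimal sketch of scaling think-aloud studies: transcribe audio with a
# speech-to-text model, then hand transcripts to an LLM for protocol coding.
# Model choice, prompt wording, and function names are illustrative assumptions.
import whisper  # pip install openai-whisper


def transcribe_think_aloud(audio_path: str) -> str:
    """Transcribe one participant's think-aloud recording."""
    model = whisper.load_model("base")  # small model, for illustration only
    result = model.transcribe(audio_path)
    return result["text"]


def code_transcript(transcript: str) -> str:
    """Stand-in for the LLM coding step: build a prompt asking a model to
    segment the transcript and label each utterance with a strategy."""
    return (
        "Segment the following think-aloud transcript into utterances and "
        "label each one with the problem-solving step it reflects "
        "(e.g. planning, calculation, checking):\n\n" + transcript
    )


if __name__ == "__main__":
    prompt = code_transcript(transcribe_think_aloud("participant_01.wav"))
    # send `prompt` to your preferred LLM and aggregate the labels across
    # participants, as in classic protocol analysis
    print(prompt[:300])
```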

Can we record and study human chains of thought? Check out our new work led by @danielwurgaft.bsky.social and @benpry.bsky.social !!

25.06.2025 18:11 · 👍 1  🔁 0  💬 0  📌 0
Post image

Some absolutely marvellous work from @gandhikanishk.bsky.social et al! Wow!

11.03.2025 15:57 · 👍 1  🔁 1  💬 0  📌 0
Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs
Test-time inference has emerged as a powerful paradigm for enabling language models to "think" longer and more carefully about complex challenges, much like skilled human experts. While reinforcemen...

13/13 Paper at arxiv.org/abs/2503.01307

04.03.2025 18:15 · 👍 2  🔁 0  💬 0  📌 0

12/13 Would also like to thank Charlie Snell, Dimitris Papailiopoulos, Eric Zelikman, Alex Havrilla, Rafael Rafailov, @upiter.bsky.social and Archit Sharma for discussions about the magic and woes of RL training with LLMs.

04.03.2025 18:15 · 👍 2  🔁 0  💬 1  📌 0

11/13 Work with amazing collaborators Ayush Chakravarthy, Anikait Singh, Nathan Lile and @noahdgoodman.bsky.social

04.03.2025 18:15 · 👍 0  🔁 0  💬 1  📌 0

10/13 This paper gives us some clues as to what facilitated self-improvement in the recent generation of LLMs and what kind of data enables it. The key lies in exploration of the right behaviors!

04.03.2025 18:15 · 👍 1  🔁 0  💬 1  📌 0

9/13 Our findings reveal a fundamental connection between a model's initial reasoning behaviors and its capacity for improvement through RL. Models that explore verification, backtracking, subgoals, and backward chaining are primed for success.

04.03.2025 18:15 · 👍 0  🔁 0  💬 1  📌 0
Post image

8/13 By curating an extended pretraining set to amplify them, we enable Llama to match Qwen's improvement.

04.03.2025 18:15 · 👍 0  🔁 0  💬 1  📌 0
Post image

7/13 Can we apply these insights to pretraining? We analyze math pretraining sets like OpenWebMath & FineMath, finding these key behaviors are quite rare.

04.03.2025 18:15 · 👍 1  🔁 0  💬 1  📌 0
Post image
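To make "these key behaviors are quite rare" concrete, here is a rough sketch of a corpus scan that streams documents from a public math pretraining set and counts surface markers of backtracking and verification. Keyword matching is only a crude stand-in for a real classifier; the dataset id and marker lists are my assumptions, and the paper's actual procedure may differ.

```python
# Illustrative corpus scan: estimate how often backtracking/verification-style
# phrasing appears in a math pretraining set. Keyword matching is a crude
# stand-in for a proper classifier; the dataset id and markers are assumptions.
import re
from datasets import load_dataset  # pip install datasets

MARKERS = {
    "backtracking": r"\b(wait|let'?s try (a different|another)|that('?s| is) (wrong|incorrect))\b",
    "verification": r"\b(let'?s (check|verify)|double[- ]check|plugging back in)\b",
}


def scan(dataset_id: str = "open-web-math/open-web-math", n_docs: int = 1000):
    """Return the fraction of sampled documents containing each marker."""
    stream = load_dataset(dataset_id, split="train", streaming=True)
    counts = {name: 0 for name in MARKERS}
    for i, doc in enumerate(stream):
        if i >= n_docs:
            break
        text = doc["text"].lower()
        for name, pattern in MARKERS.items():
            if re.search(pattern, text):
                counts[name] += 1
    return {name: c / n_docs for name, c in counts.items()}


if __name__ == "__main__":
    print(scan())
```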

6/13 Priming with empty and length-matched empty chains of thought fails to produce improvement, reverting models to baseline performance. This shows it's the specific cognitive behaviors, not just longer outputs, that enable learning.

04.03.2025 18:15 · 👍 3  🔁 0  💬 1  📌 0
Post image
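A minimal sketch of the two control conditions described above, assuming a simple prompt format and filler text (both are my choices, not the paper's); the point is only that the length-matched variant pads the output without adding any cognitive behaviors:

```python
# Two control conditions: no reasoning at all, and meaningless "reasoning"
# padded to roughly the length of a real chain of thought. Prompt format and
# filler choice are illustrative assumptions.
def empty_cot(question: str, answer: str) -> str:
    """Solution with no intermediate reasoning."""
    return f"Question: {question}\nAnswer: {answer}"


def length_matched_empty_cot(question: str, answer: str, real_cot: str) -> str:
    """Replace the real reasoning with filler of about the same length, so any
    RL gains can't be explained by longer outputs alone."""
    filler = " ".join(["..."] * len(real_cot.split()))
    return f"Question: {question}\nReasoning: {filler}\nAnswer: {answer}"
```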

5/13 Crucially, the reasoning patterns matter more than having correct answers. Models primed with incorrect solutions that demonstrate the right cognitive behaviors still show substantial improvement. The behaviors are key.

04.03.2025 18:15 · 👍 2  🔁 0  💬 1  📌 0
Post image

4/13 We curate priming datasets with different behavior combinations and find that models primed with backtracking and verification consistently improve. Interestingly, RL selectively amplifies the most useful behaviors for reaching the goal.

04.03.2025 18:15 · 👍 1  🔁 0  💬 1  📌 0
Post image
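One way to picture "priming datasets with different behavior combinations": starting from solutions that have already been labeled with the behaviors they exhibit, keep only the examples matching a target combination. The labeling schema and selection rule below are hypothetical illustrations.

```python
# Hypothetical assembly of priming subsets keyed by behavior combination,
# given examples already labeled with the behaviors they exhibit.
from typing import Dict, List, Set

BEHAVIORS = {"verification", "backtracking", "subgoal_setting", "backward_chaining"}


def build_priming_set(examples: List[Dict], wanted: Set[str]) -> List[Dict]:
    """One simple selection rule: keep examples whose labeled behaviors
    exactly match the target combination."""
    assert wanted <= BEHAVIORS, "unknown behavior label"
    return [ex for ex in examples if set(ex["behaviors"]) == wanted]


# e.g. examples = [{"question": ..., "solution": ..., "behaviors": ["backtracking"]}, ...]
# backtracking_only    = build_priming_set(examples, {"backtracking"})
# backtrack_and_verify = build_priming_set(examples, {"backtracking", "verification"})
```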

3/13 Can we change a model's initial properties to enable improvement? Yes! After "priming" Llama by finetuning on examples that demonstrate these behaviors, it starts improving from RL just like Qwen. The priming jumpstarts the learning process.

04.03.2025 18:15 · 👍 4  🔁 0  💬 1  📌 0
Post image
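In terms of plumbing, "priming" here is just supervised finetuning on behavior-rich examples before RL. A small sketch of exporting such examples to an SFT-style JSONL file (field names and format are assumptions, not the paper's release format):

```python
# Hypothetical export of a priming set to JSONL for supervised finetuning:
# each record pairs a problem with a solution that demonstrates the target
# cognitive behaviors. Field names are assumptions.
import json
from typing import Dict, List


def write_priming_jsonl(examples: List[Dict], path: str) -> None:
    with open(path, "w") as f:
        for ex in examples:
            record = {
                "prompt": ex["question"],
                "completion": ex["solution"],  # behavior-rich chain of thought
            }
            f.write(json.dumps(record) + "\n")


# After finetuning on a file like this, the primed model is trained further
# with RL, as described in the thread.
```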

2/13 We identify 4 key cognitive behaviors that enable successful learning: Verification (checking work), Backtracking (trying new approaches), Subgoal Setting (breaking problems down) & Backward Chaining (working backwards from a goal). Qwen naturally exhibits these, while Llama mostly lacks them.

04.03.2025 18:15 · 👍 5  🔁 1  💬 1  📌 1
Post image
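As a concrete (if crude) picture of what tagging these four behaviors in a chain of thought could look like, here is a toy heuristic tagger. Real behavior classification would more plausibly use an LLM judge or hand annotation; the phrase lists below are my assumptions, not the paper's criteria.

```python
# Toy tagger for the four cognitive behaviors in a chain of thought.
# Phrase lists are illustrative assumptions; a real pipeline would use an
# LLM judge or human annotation rather than keyword matching.
import re
from typing import Dict

BEHAVIOR_PATTERNS: Dict[str, str] = {
    "verification":      r"let'?s (check|verify)|check(ing)? (our|the) (work|answer)",
    "backtracking":      r"\bwait\b|doesn'?t work|let'?s try (another|a different)",
    "subgoal_setting":   r"\bfirst\b|step \d|break (this|it) (down|into)",
    "backward_chaining": r"working backwards?|start(ing)? from the (goal|target)",
}


def tag_behaviors(chain_of_thought: str) -> Dict[str, bool]:
    """Return which of the four behaviors a chain of thought appears to show."""
    text = chain_of_thought.lower()
    return {name: bool(re.search(pat, text)) for name, pat in BEHAVIOR_PATTERNS.items()}


if __name__ == "__main__":
    cot = "First, break it into subgoals. Wait, that doesn't work. Let's check our answer."
    print(tag_behaviors(cot))  # three of the four behaviors should fire here
```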

1/13 New Paper!! We try to understand why some LMs self-improve their reasoning while others hit a wall. The key? Cognitive behaviors! Read our paper on how the right cognitive behaviors can make all the difference in a model's ability to improve with RL! 🧡

04.03.2025 18:15 · 👍 56  🔁 17  💬 2  📌 3

emotionally, i'm constantly walking into a glass door

19.02.2025 04:44 · 👍 40  🔁 7  💬 4  📌 0
a romantic toaster presenting a single red rose

Can Large Language Models THINK and UNDERSTAND? The answer from cognitive science is, of course, lolwut YES!

The more interesting question is CAN TOASTERS LOVE? Intriguingly, the answer is ALSO YES! And they love YOU

19.01.2025 12:39 · 👍 102  🔁 20  💬 4  📌 7
Post image

They present a scientifically optimized recipe for "Pasta alla Cacio e pepe" based on their findings, enabling consistently flawless execution of this classic dish.

"Phase behavior of Cacio and Pepe sauce"

arxiv.org/abs/2501.00536

06.01.2025 23:47 · 👍 24  🔁 1  💬 0  📌 2

These are actually good? No blatant physics violations at least? Definitely better than I expected

18.12.2024 05:53 · 👍 5  🔁 0  💬 1  📌 0

Actually can you try it with objects that it might have actually seen? Like a blue book falling on a tennis ball? I feel like in abstract prompts like these, material properties are underspecified.

18.12.2024 03:08 · 👍 4  🔁 0  💬 1  📌 0
The broader spectrum of in-context learning
The ability of language models to learn a task from a few examples in context has generated substantial interest. Here, we provide a perspective that situates this type of supervised few-shot learning...

What counts as in-context learning (ICL)? Typically, you might think of it as learning a task from a few examples. However, we've just written a perspective (arxiv.org/abs/2412.03782) suggesting that a much broader spectrum of behaviors can be interpreted as ICL! Quick summary thread: 1/7

10.12.2024 18:17 · 👍 123  🔁 31  💬 2  📌 1

I'll be at NeurIPS this week :) looking forward to catching up with folks! Please reach out if you want to chat!!

09.12.2024 05:26 · 👍 4  🔁 0  💬 0  📌 0

Oo can you add me?

22.11.2024 00:52 · 👍 1  🔁 0  💬 1  📌 0

Okay the people requested one so here is an attempt at a Computational Cognitive Science starter pack -- with apologies to everyone I've missed! LMK if there's anyone I should add!

go.bsky.app/KDTg6pv

11.11.2024 17:27 · 👍 223  🔁 92  💬 71  📌 3

I am not actively looking for people this cycle, but re-sharing in case of relevance to others

12.11.2024 00:25 · 👍 3  🔁 1  💬 0  📌 0

Told my kids about the liar's paradox today and, I'm not lying, they didn't believe me.

17.12.2023 16:00 · 👍 3  🔁 1  💬 0  📌 0
