
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)

@rao2z.bsky.social

AI researcher & teacher at SCAI, ASU. Former President of AAAI & Chair of AAAS Sec T. Here to tweach #AI. YouTube Ch: http://bit.ly/38twrAV Twitter: rao2z

877 Followers  |  16 Following  |  73 Posts  |  Joined: 14.11.2024

Latest posts by rao2z.bsky.social on Bluesky

Preview
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) on X: "Proofs are not reasoning traces & I/O Format Language shouldn't be much of an issue for LLMs #SundayHarangue (Special IMO edition). 1/ My feed these last couple of days of IMO discussions has been full of comments that seem to conflate LRM intermediate tokens (aka reasoning" / X

Proofs are not reasoning traces & I/O Format Language shouldn't be much of an issue for LLMs + other things #SundayHarangue (Special IMO edition). 🧵 👇

x.com/rao2z/status...

22.07.2025 13:45 — 👍 4    🔁 1    💬 0    📌 0
Preview
Neither LLMs nor LRMs have the ability to go beyond humanity's knowledge closure--which is needed for true discoveries. | Subbarao Kambhampati: Both are beholden to the collected knowledge of humanity (whether de...

Both LLMs and LRMs are upper bounded by humanity's knowledge closure. True scientific discoveries are, by definition, outside of that closure. Ergo, LLMs/LRMs are great force multipliers for us; but they don't support "Nobel this weekend" hype..

👉 www.linkedin.com/posts/subbar...
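A minimal way to picture "knowledge closure" (an illustrative sketch of my own, not from the post): treat the collected knowledge as a fact base plus inference rules, and the closure as everything derivable from it by forward chaining. A genuine discovery would then be a fact outside that closure; the facts and rules below are hypothetical.

```python
# Illustrative sketch: "knowledge closure" as the deductive closure of a fact
# base under simple Horn-style rules. The facts/rules here are hypothetical.

def deductive_closure(facts, rules):
    """Forward-chain until no new facts can be derived."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closure and conclusion not in closure:
                closure.add(conclusion)
                changed = True
    return closure

known = {"A", "B"}
rules = [({"A", "B"}, "C"),   # from A and B, derive C
         ({"C"}, "D")]        # from C, derive D

closure = deductive_closure(known, rules)
print(sorted(closure))        # ['A', 'B', 'C', 'D'] -- reachable by recombination
print("E" in closure)         # False -- a genuine "discovery" lies outside the closure
```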

19.07.2025 22:18 — 👍 9    🔁 2    💬 0    📌 0
Post image

Computational Complexity is the wrong measure for LRMs (as it was for LLMs)--think distributional distance instead #SundayHarangue (yes, we're back!)

👉 x.com/rao2z/status...

13.07.2025 21:42 — 👍 2    🔁 0    💬 0    📌 0

A̶̶̶I̶̶̶ ̶ ̶ ̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
̶̶̶A̶̶̶G̶̶̶I̶̶̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶G̶e̶n̶e̶r̶a̶l̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
̶̶̶A̶̶̶S̶̶̶I̶̶̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶S̶u̶p̶e̶r̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)
ASDI (Artificial Super Duper Intelligence)

Don't get stuck with yesterday's hypeonyms!
Dare to get to the next level!

#AIAphorisms

23.06.2025 22:36 — 👍 2    🔁 1    💬 0    📌 0
Preview
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) on X: "Some of what that recent Apple LRM limitations paper shows is known (pardon my friendly Schmidhubering; I do welcome more LLM studies with scientific skepticism). Our study 👇 from Sep 2024 shows o1 accuracy degrading as complexity increases.. 1/ https://t.co/d8zEUGi4SZ" / X

This series of lectures was given the same week there was all that brouhaha over the Apple illusion paper (I was giving these lectures during the day and talking to reporters in the evening 😅). As such they are pretty up-to-date! 3/

x.com/rao2z/status...

19.06.2025 22:27 — 👍 0    🔁 0    💬 0    📌 0
Post image

The lectures start with a "big picture" overview (Lecture 1); focus on standard LLMs and their limitations, and LLM-Modulo as a test-time scaling approach (Lecture 2); and end with a critical appraisal of the test-time scaling and RL post-training techniques (Lecture 3). 2/

19.06.2025 22:27 — 👍 0    🔁 0    💬 1    📌 0
Preview
ACDL Summer School Lectures on Planning/Reasoning Abilities of LLMs/LRMs - YouTube

For anyone interested, here are the videos of the three lectures (~50 min each) on the reasoning/planning capabilities of LLMs/LRMs that I gave at #ACDL2025 at the Riva Del Sole resort last week. 1/

www.youtube.com/playlist?lis...

19.06.2025 22:27 — 👍 3    🔁 2    💬 1    📌 0

...it basically confirmed what is already well-established: LLMs (& LRMs & "LLM agents") have trouble w/ problems that require many steps of reasoning/planning.

See, e.g., lots of recent papers by Subbarao Kambhampati's group at ASU. (2/2)

09.06.2025 22:53 — 👍 52    🔁 5    💬 2    📌 0

An AGI-wannabe reasoning model whining that it couldn't handle a problem because its context window isn't big enough is like a superman-wannabe little kid protesting that he couldn't add those numbers because he doesn't have enough fingers and toes.. #AIAphorisms

16.06.2025 00:47 — 👍 3    🔁 0    💬 0    📌 0
Preview
Beyond Semantics: The Unreasonable Effectiveness of Reasonless Intermediate Tokens. Recent impressive results from large reasoning models have been interpreted as a triumph of Chain of Thought (CoT), and especially of the process of training on CoTs sampled from base LLMs in order to...

"our counter-intuitive results demonstrate ways in which common interpretations of Large Reasoning Models may be anthropomorphizations or simplifications" arxiv.org/abs/2505.13775

01.06.2025 13:30 — 👍 55    🔁 11    💬 2    📌 1
Preview
Lucas Saldyt on X: "Neural networks can express more than they learn, creating expressivity-trainability gaps. Our paper, “Mind The Gap,” shows neural networks best learn parallel algorithms, and analyzes gaps in faithfulness and effectiveness. @rao2z https://t.co/8YjxPkXFu0" / X

The transformer expressiveness results are often a bit of a red herring, as there tends to be a huge gap between what can be expressed in transformers and what can be learned with gradient descent. Mind the Gap, a new paper with Lucas Saldyt, dives deeper into this issue 👇👇

x.com/SaldytLucas/...
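A toy illustration of that expressivity-trainability gap (my own sketch under simple assumptions, not an experiment from the paper): a one-hidden-layer ReLU network of width n can express n-bit parity exactly with hand-set weights, yet plain gradient descent on the same architecture from a random initialization typically stays near chance accuracy.

```python
import numpy as np

# Toy expressivity-vs-trainability gap on n-bit parity (illustrative only;
# not the construction or experiment from the "Mind The Gap" paper).
n = 10
X = np.array([[(i >> k) & 1 for k in range(n)] for i in range(2 ** n)], float)
y = X.sum(axis=1) % 2                                   # n-bit parity labels

# Expressivity: hand-set weights. f(s) = sum_k a_k * relu(s - k) traces a
# triangle wave that equals parity(s) at every integer s = sum(x).
W1 = np.ones((n, n))                                    # each hidden unit sees sum(x)
b1 = -np.arange(n, dtype=float)                         # unit k computes relu(sum(x) - k)
a = np.array([1.0] + [2.0 * (-1) ** k for k in range(1, n)])   # 1, -2, 2, -2, ...
hand = np.maximum(X @ W1 + b1, 0) @ a
print("hand-built accuracy:", np.mean((hand > 0.5) == y))      # 1.0 -- expressible

# Trainability: the same architecture trained by full-batch gradient descent
# from a random init usually stays near 50% accuracy on parity.
rng = np.random.default_rng(0)
W1t, b1t, at = rng.normal(0, 0.5, (n, n)), np.zeros(n), rng.normal(0, 0.5, n)
lr = 0.01
for _ in range(2000):
    H = np.maximum(X @ W1t + b1t, 0)                    # hidden activations
    err = H @ at - y                                    # squared-loss residual
    grad_a = H.T @ err / len(X)
    grad_H = np.outer(err, at) * (H > 0)                # back through the ReLU
    W1t -= lr * (X.T @ grad_H / len(X))
    b1t -= lr * grad_H.mean(axis=0)
    at -= lr * grad_a
print("trained accuracy:", np.mean(((np.maximum(X @ W1t + b1t, 0) @ at) > 0.5) == y))
```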

30.05.2025 13:59 — 👍 3    🔁 0    💬 0    📌 1
Post image

Anthropomorphization of intermediate tokens as reasoning/thinking traces isn't quite a harmless fad, and may be pushing LRM research into questionable directions.. So we decided to put together a more complete argument. Paper 👉 arxiv.org/pdf/2504.09762 (Twitter thread: x.com/rao2z/status...)

28.05.2025 13:41 — 👍 10    🔁 1    💬 0    📌 1
Preview
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) on X: "This RLiNo? paper led by @soumya_samineni & @durgesh_kalwar dives into the MDP model used in the RL post-training methods inspired by DeepSeek R1, and asks if some of the idiosyncrasies of RL in R1 aren't just consequences of the simplistic structural assumptions in the MDP 🧵1/ https://t.co/qPBY3tJILE" / X

Longer thread here

x.com/rao2z/status...

25.05.2025 22:51 — 👍 0    🔁 0    💬 0    📌 0
Post image

This RLiNo? paper (arxiv.org/abs/2505.13697), led by Soumya Samineni and Durgesh Kalwar, dives into the MDP model used in the RL post-training methods inspired by DeepSeek R1, and asks if some of the idiosyncrasies of RL aren't just consequences of the simplistic structural assumptions made in the MDP.
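For context, here is a rough sketch (my own simplification under common assumptions, not the paper's formulation) of the token-level MDP typically assumed in R1-style RL post-training: the state is the prompt plus the tokens generated so far, the action is the next token, transitions are deterministic appends, and the reward arrives only when the episode ends. The `outcome_reward` checker below is a hypothetical placeholder.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Rough sketch of the token-level MDP commonly assumed in R1-style RL
# post-training (a simplification for illustration, not the paper's code).

@dataclass
class TokenState:
    prompt: List[str]
    generated: List[str] = field(default_factory=list)   # tokens emitted so far

def step(state: TokenState,
         action: str,                                     # action = next token
         outcome_reward: Callable[[List[str]], float],    # hypothetical end-of-episode checker
         eos: str = "<eos>") -> Tuple[TokenState, float, bool]:
    """Deterministic transition: append the token; reward is sparse and terminal-only."""
    nxt = TokenState(state.prompt, state.generated + [action])
    done = (action == eos)
    reward = outcome_reward(nxt.generated) if done else 0.0
    return nxt, reward, done
```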

25.05.2025 22:51 — 👍 4    🔁 0    💬 1    📌 0
Post image

Do Intermediate Tokens Produced by LRMs (need to) have any semantics? Our new study 👇

Thread 👉 x.com/rao2z/status...

21.05.2025 20:08 — 👍 8    🔁 0    💬 2    📌 0
Post image

Delighted to share that Siddhant Bhambri & Mudit Verma's critical evaluation and refutation of the reasoning claims of ReACT has been accepted to #TMLR (Transactions on Machine Learning Research)

👉 https://openreview.net/forum?id=aFAMPSmNHR

13.05.2025 17:22 — 👍 4    🔁 1    💬 1    📌 0
Preview
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) on X: "Solving Single Agent Fully Observable Deterministic (SAFODP) Problems with Dec-POMDP approaches #SundayHarangue #allegory Imagine you modeled your decision problem into a Dec-POMDP problem ('cuz that's as expressive a decision model as you can get!)--but with some https://t.co/vDWYTHbnQA" / X

Solving Single Agent Fully Observable Deterministic (SAFODP) Problems with Dec-POMDP approaches #SundayHarangue #allegory

x.com/rao2z/status...

12.05.2025 00:45 — 👍 2    🔁 0    💬 0    📌 1

IMHO, the whole idea of connecting the "length of intermediate tokens" produced by LRMs to inference-time compute is a mind-boggling demonstration of circular reasoning--one that comes from the assumptions about the MDP and reward models.. 👇

x.com/rao2z/status...

09.05.2025 14:42 — 👍 3    🔁 0    💬 1    📌 0
Post image

It ain't "The Bitter Lesson" if you are in the loop curating the training data for your LLM, y'all.. Pick your lesson, will ya? #SundayHarangue (h/t @kstechly.bsky.social)

05.05.2025 11:44 — 👍 3    🔁 3    💬 0    📌 0
Preview
(How) Do reasoning models reason? We will provide a broad unifying perspective on the recent breed of Large Reasoning Models (LRMs) such as OpenAI o1 and DeepSeek R1, including their promise, sources of power, misconceptions and limit...

Don't use summarizers for the papers by @rao2z.bsky.social because the reasoning traces therein are, unlike the LRMs & LLMs under investigation, substantively meaningful, semantically well-ordered, and stylistically compelling and engaging!
#AI #LLMs #CoT
arxiv.org/abs/2504.09762

19.04.2025 19:23 — 👍 8    🔁 2    💬 1    📌 0
(How) Do LLMs Reason/Plan? (Talk given at Microsoft Research; 4/11/25)
YouTube video by Subbarao Kambhampati

Here is a recording of my talk at @msftresearch.bsky.social last week titled "(How) Do LLMs Reason/Plan?" (Also gave a version of it as a distinguished lecture at Oracle today..)

www.youtube.com/watch?v=0u2h...

16.04.2025 00:38 — 👍 5    🔁 1    💬 0    📌 0

A preprint available at arxiv.org/abs/2504.09762

15.04.2025 17:23 — 👍 3    🔁 0    💬 0    📌 1

(With @kstechly.bsky.social & Karthik Valmeekam)

13.04.2025 17:39 — 👍 0    🔁 0    💬 0    📌 0
Post image Post image Post image

Our invited commentary for the Annals of the New York Academy of Sciences, titled "(How) Do reasoning models reason?", is now online

👉 nyaspubs.onlinelibrary.wiley.com/doi/epdf/10....

It is a written version of my recent talks (and #SundayHarangues) on the recent developments in LRMs..

13.04.2025 17:37 — 👍 2    🔁 2    💬 1    📌 1
Post image

Woo hoo.. Our first #TMLR paper!🤗 On the planning and scheduling abilities of LRMs o1 & R1 (w/ Karthik, Kaya, Atharva)

👉 openreview.net/forum?id=FkK...

Even a jaded researcher like me has to admit that Transactions on Machine Learning Research is a veritable oasis among #AI publication venues! 🙏

09.04.2025 14:28 — 👍 7    🔁 1    💬 0    📌 0

AI Hype: The phenomenon where experts without expertise hype up imminent arrival of expertise without experts. #AIAphorisms

30.03.2025 08:26 — 👍 10    🔁 4    💬 0    📌 0
Post image

Test-time-scaling, Post-training and Distillation are just compiling the verifier signal into the LLM at different phases #SundayHarangue

See 👉 x.com/rao2z/status...

Or 👉 www.linkedin.com/posts/subbar...
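One way to make the claim concrete (a hedged sketch of my own, not from the post or the linked threads): the same verifier signal can be spent at inference via best-of-n selection, compiled into the weights as an RL reward during post-training, or compiled into data by filtering traces for distillation. `generate`, `verifier`, `rl_update`, and `fine_tune` below are hypothetical placeholders, not a real API.

```python
# Hedged sketch: one verifier signal, "compiled in" at three different phases.
# generate(), verifier(), rl_update(), fine_tune() are hypothetical placeholders.

def test_time_scaling(model, prompt, n=16):
    """Phase 1: spend the verifier at inference (best-of-n selection)."""
    candidates = [generate(model, prompt) for _ in range(n)]
    return max(candidates, key=verifier)

def post_training(model, prompts, steps=1000):
    """Phase 2: compile the verifier into the weights as an RL reward."""
    for _ in range(steps):
        for p in prompts:
            trace = generate(model, p)
            rl_update(model, p, trace, reward=verifier(trace))
    return model

def distillation(student, teacher, prompts, n=16):
    """Phase 3: compile the verifier into the data (keep only verified traces), then SFT."""
    kept = [(p, t) for p in prompts
            for t in (generate(teacher, p) for _ in range(n)) if verifier(t)]
    return fine_tune(student, kept)
```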

16.03.2025 22:10 — 👍 6    🔁 0    💬 0    📌 0
Post image

Pushing for "human-sounding" traces that have no semantic standing engenders false (undeserved) confidence for the end users. If end accuracy is all you care for, there is no obvious reason to stick to "human-sounding" traces--Let RL be RL--and learn its own prompt augmentation language!

10.03.2025 14:01 — 👍 2    🔁 0    💬 0    📌 0
Post image

Intermediate tokens being dubbed "Reasoning Traces" is the new anthropomorphization fashion.. See this video #SundayHarangue 👉 https://youtube.com/watch?v=CQ5JS3v61Ns&list=PLNONVE5W8PCRbf3WmbcqgXPToJuA2NUfP&t=3787s, which wonders whether LRMs should instead be called LMMs--Large Mumbling Models..

10.03.2025 14:00 — 👍 9    🔁 1    💬 2    📌 0
Post image Post image Post image

RL is great; but RL envy in LLMs may not be.. (or R1's SFT vs. RL is more like Batch vs. SGD) #SundayHarangue (Special Turing edition 😋) There has been a tendency in the LLM literature to dress up simplistic ideas in RL garb to gain additional respectability... 👉
x.com/rao2z/status...

07.03.2025 23:01 — 👍 3    🔁 0    💬 0    📌 0
