LLMs excel at finding surprising “needles” in very long documents, but can they detect when information is conspicuously missing?
🫥AbsenceBench🫥 shows that even SoTA LLMs struggle on this task, suggesting that LLMs have trouble perceiving “negative spaces”.
Paper: arxiv.org/abs/2506.11440
🧵[1/n]
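Roughly, the evaluation loop looks like this (a minimal sketch assuming the task is "given the original and a redacted copy, name what was removed"; the prompt wording and scoring helper are my own illustration, not the paper's harness):

```python
import random

def make_absence_example(document_lines, n_omit=3, seed=0):
    """Remove a few lines from a document and record which ones."""
    rng = random.Random(seed)
    omitted_idx = set(rng.sample(range(len(document_lines)), n_omit))
    omitted = [l for i, l in enumerate(document_lines) if i in omitted_idx]
    redacted = [l for i, l in enumerate(document_lines) if i not in omitted_idx]
    return redacted, omitted

def build_prompt(original_lines, redacted_lines):
    """Show the model both versions and ask for the missing lines."""
    return (
        "Original document:\n" + "\n".join(original_lines) +
        "\n\nExcerpt with some lines removed:\n" + "\n".join(redacted_lines) +
        "\n\nList exactly the lines that are missing from the excerpt."
    )

def recall(model_answer, omitted):
    """Fraction of omitted lines the model actually names."""
    return sum(line in model_answer for line in omitted) / len(omitted)
```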
New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt matter to the task representation formed by a language model? Stick around to find out! 1/N
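For concreteness, here are the two prompt forms the post is contrasting, on a toy antonym task (illustrative strings of my own, not the paper's prompts):

```python
# Same task, two surface forms.

instruction_prompt = (
    "Output the antonym of the given word.\n"
    "Input: hot\nOutput:"
)

demonstration_prompt = (
    "Input: tall\nOutput: short\n"
    "Input: fast\nOutput: slow\n"
    "Input: hot\nOutput:"
)

# The question: do these induce the same internal task representation,
# even though the prompts share no surface text describing the task?
```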
8/8 Big thanks to my co-authors Alireza, Xinting, and Mark! 🔗 Read the paper: arXiv.org/abs/2502.02393
7/8 Takeaway: There’s no free lunch: long CoTs are information-theoretically necessary for these tasks. Any compression tricks will hit these hard limits unless we revamp the model architecture itself.
6/8 Graph reachability: Given a DAG and two nodes, is there a path? You might try to “cheat” with clever CoTs—but in fact you need Ω(|E| log |V|) steps, matching BFS’s runtime (plus indexing overhead).
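For reference, here is the BFS being matched (a standard textbook sketch, not code from the paper): each edge is examined at most once, and writing a node index in the trace costs about log |V| symbols, which is where the extra log factor comes from.

```python
from collections import deque

def reachable(adj, s, t):
    """BFS reachability on a DAG given as an adjacency dict.
    Each edge is examined at most once: O(|E|) queue operations,
    each naming a node with ~log|V| symbols."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Example DAG: 0 -> 1 -> 2, 0 -> 3
print(reachable({0: [1, 3], 1: [2]}, 0, 2))  # True
print(reachable({0: [1, 3], 1: [2]}, 3, 2))  # False
```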
5/8 Multiplication: LLMs notoriously flub middle digits of big-integer products. Those digits hinge on all input bits, so you get a linear CoT lower bound. Our best upper bound uses Schönhage–Strassen in O(n log n) steps; closing the log(n) gap is open.
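A quick sanity check on the "middle digits depend on everything" point (my own toy experiment, not from the paper): flip just the lowest bit of one 256-bit factor. The product shifts by the other factor, which is wide enough to reach the middle digits of the ~512-bit result.

```python
import random

random.seed(0)
a = random.getrandbits(256)
b = random.getrandbits(256)

p1 = a * b          # ~512-bit product, ~154 decimal digits
p2 = (a ^ 1) * b    # flip the lowest bit of a: product shifts by +/- b

s1, s2 = str(p1), str(p2)
mid = len(s1) // 2
print(s1[mid - 5 : mid + 5])
print(s2[mid - 5 : mid + 5])  # the middle digits typically differ
```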
4/8 Regular languages: Hard-attention Transformers without CoT sit in AC⁰. To recognize anything beyond AC⁰—e.g., Parity—you need at least linear-length CoTs. Sublinear won’t cut it.
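Concretely, the kind of linear-length trace that suffices for Parity is just a running XOR, one step per input bit (an illustrative sketch of such a CoT, not the paper's exact construction):

```python
def parity_cot(bits):
    """One CoT step per input bit: maintain a running XOR.
    Trace length is n, matching the linear lower bound."""
    acc, trace = 0, []
    for i, b in enumerate(bits):
        acc ^= int(b)
        trace.append(f"after bit {i}: running parity = {acc}")
    return trace

for step in parity_cot("1011"):
    print(step)
# last line: running parity = 1  -> the answer
```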
3/8 Yes! We derive tight lower bounds for several algorithmic tasks. The theory assumes hard attention, but soft-attention experiments paint the same picture.
2/8 Prior work shows that CoT-equipped Transformers can simulate P-time computations via polynomial-length traces. But those broad results don’t pin down exactly how long you need to solve a specific problem. We ask: can we get fine-grained lower bounds on CoT length?
Chain-of-Thought (CoT) reasoning lets LLMs solve complex tasks, but long CoTs are expensive. How short can they be while still working? Our new ICML paper tackles this foundational question.
Check out our new paper on unlearning for LLMs 🤖. We show that *not all data are unlearned equally* and argue that future work on LLM unlearning should take properties of the data to be unlearned into account. This work was led by my intern @a-krishnan.bsky.social
🔗: arxiv.org/abs/2504.05058
Our new paper! "Analytic theory of creativity in convolutional diffusion models" led expertly by @masonkamb.bsky.social
arxiv.org/abs/2412.20292
Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs with high median r^2~0.9
Just read this, neat paper! I really enjoyed Figure 3 illustrating the basic idea: Suppose you train a diffusion model where the denoiser is restricted to be "local" (each pixel i only depends on its 3x3 neighborhood N(i)). The optimal local denoiser for pixel i is E[ x_0[i] | x_t[ N(i) ] ]...cont
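If the data distribution is taken to be the empirical distribution over training images, that conditional expectation has a closed form: a softmax-weighted average of the training images' center pixels, weighted by how well their (scaled) 3x3 patches match the observed noisy patch. Here's a rough sketch, using 2D pixel indices (i, j), an interior pixel, and the usual DDPM scaling x_t = sqrt(alpha_bar_t) * x_0 + sigma_t * noise; the variable names are mine, not the paper's.

```python
import numpy as np

def optimal_local_denoiser(x_t, train_images, i, j, alpha_bar_t, sigma_t):
    """E[x_0[i,j] | x_t over the 3x3 neighborhood of (i,j)], with the data
    distribution taken to be uniform over `train_images`.
    Assumes (i, j) is an interior pixel."""
    noisy_patch = x_t[i - 1:i + 2, j - 1:j + 2]
    log_w, centers = [], []
    for x0 in train_images:
        clean_patch = x0[i - 1:i + 2, j - 1:j + 2]
        resid = noisy_patch - np.sqrt(alpha_bar_t) * clean_patch
        log_w.append(-np.sum(resid ** 2) / (2 * sigma_t ** 2))
        centers.append(x0[i, j])
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())  # numerically stable softmax weights
    w /= w.sum()
    return float(np.dot(w, np.array(centers)))
```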