Michael Hahn

@m-hahn.bsky.social

Prof at Saarland. NLP and machine learning. Theory and interpretability of LLMs. https://www.mhahn.info

93 Followers 252 Following 8 Posts Joined Jun 2024
8 months ago

LLMs excel at finding surprising “needles” in very long documents, but can they detect when information is conspicuously missing?

🫥AbsenceBench🫥 shows that even SoTA LLMs struggle on this task, suggesting that LLMs have trouble perceiving “negative spaces”.
Paper: arxiv.org/abs/2506.11440

🧵[1/n]
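
For intuition, an AbsenceBench-style probe can be mocked up in a few lines (a sketch of mine; the function and names are hypothetical, not the benchmark's API): delete k lines from a document, then ask the model, given both versions, which lines are missing.

```python
import random

# Hypothetical sketch of an absence probe (names are mine, not the
# benchmark's): remove k lines from a document; the "gold" answer is
# the negative space the model must perceive.
def make_absence_probe(lines, k, seed=0):
    rng = random.Random(seed)
    missing_idx = sorted(rng.sample(range(len(lines)), k))
    redacted = [l for i, l in enumerate(lines) if i not in missing_idx]
    gold = [lines[i] for i in missing_idx]
    return redacted, gold

doc = ["alpha", "bravo", "charlie", "delta", "echo"]
redacted, gold = make_absence_probe(doc, k=2)
# The model sees `doc` and `redacted` and must output `gold`.
```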

9 months ago

New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt matter to the task representation formed by a language model? Stick around to find out! 1/N

10 months ago
Preview: Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Chain-of-thought reasoning and scratchpads have emerged as critical tools for enhancing the computational capabilities of transformers. While theoretical results show that polynomial-length scratchpad...

8/8 Big thanks to my co-authors Alireza, Xinting, and Mark! 🔗 Read the paper: arXiv.org/abs/2502.02393

10 months ago

7/8 Takeaway: There’s no free lunch: long CoTs are information-theoretically necessary for these tasks. Any compression trick will hit these hard limits unless we revamp the model architecture itself.

10 months ago

6/8 Graph reachability: Given a DAG and two nodes, is there a path? You might try to “cheat” with clever CoTs—but in fact you need Ω(|E| log |V|) steps, matching BFS’s runtime (plus indexing overhead).
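
For intuition, here is plain BFS reachability (a sketch, not the paper's transformer construction): the Ω(|E| log |V|) bound roughly corresponds to one CoT step per edge examined, where naming a node costs ~log|V| bits.

```python
from collections import deque

# Plain BFS reachability on a DAG: each edge is examined at most once
# (the |E| factor); writing down a node index takes ~log|V| bits,
# which is where the extra log factor in the bound comes from.
def reachable(adj, s, t):
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

dag = {0: [1, 2], 1: [3], 2: [3]}
reachable(dag, 0, 3)  # True: 0 -> 1 -> 3
reachable(dag, 3, 0)  # False: edges only go forward
```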

10 months ago

5/8 Multiplication: LLMs notoriously flub middle digits of big-integer products. Those digits hinge on all input bits, so you get a linear CoT lower bound. Our best upper bound uses Schönhage–Strassen in O(n log n) steps; closing the log(n) gap remains open.
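
A quick way to see the dependence (my illustration, not taken from the paper): the middle bit of x·y can flip when a single low-order input bit flips, so no summary of just part of the input pins it down.

```python
# Bit n of the 2n-bit product x*y: the "middle digit" in binary.
def middle_bit(x, y, n):
    return (x * y >> n) & 1

n = 8
x, y = 0b10110011, 0b11011010     # two arbitrary 8-bit inputs
a = middle_bit(x, y, n)           # 179 * 218 = 39022 -> bit 8 is 0
b = middle_bit(x ^ 1, y, n)       # flip x's lowest bit -> bit 8 is 1
# a != b: the middle digit reacts to the least significant input bit
```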

10 months ago

4/8 Regular languages: Hard-attention Transformers without CoT sit in AC⁰. To recognize anything beyond AC⁰—e.g., Parity—you need at least linear-length CoTs. Sublinear won’t cut it.
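
What a linear-length CoT for Parity looks like concretely (a minimal sketch of mine, not the paper's transformer construction): one scratchpad token per input bit, carrying the running XOR.

```python
# Each CoT step emits the running parity after folding in one more
# bit; the scratchpad length equals the input length: linear CoT.
def parity_cot(bits):
    cot, acc = [], 0
    for b in bits:
        acc ^= b
        cot.append(acc)   # one emitted token per step
    return cot

parity_cot([1, 0, 1, 1])  # [1, 1, 0, 1]; the final token is the parity
```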

10 months ago

3/8 Yes! We derive tight lower bounds for several algorithmic tasks. The theory assumes hard attention, but soft-attention experiments paint the same picture.

10 months ago

2/8 Prior work shows that CoT-equipped Transformers can simulate P-time computations via polynomial-length traces. But those broad results don’t pin down exactly how long you need to solve a specific problem. We ask: can we get fine-grained lower bounds on CoT length?

10 months ago

Chain-of-Thought (CoT) reasoning lets LLMs solve complex tasks, but long CoTs are expensive. How short can they be while still working? Our new ICML paper tackles this foundational question.

11 months ago
Diagram illustrating a hypothesis about knowledge unlearning in language models. The left side shows a training corpus with varying frequencies of facts, such as 'Montreal is a city in Quebec' (high frequency) and 'Atlantis is a city in the ocean' (lower frequency). The center shows a language model being trained on this data, then undergoing unlearning. The right side demonstrates the 'Forget Quality' results, where the model more effectively unlearns the less frequent fact ('Atlantis is in Greece') while retaining the more frequent knowledge. Labels A, B, and C mark key points in the hypothesis: A (frequency variations in training data), B (influence of frequency), and C (unlearning effectiveness).

Check out our new paper on unlearning for LLMs 🤖. We show that *not all data are unlearned equally* and argue that future work on LLM unlearning should take properties of the data to be unlearned into account. This work was led by my intern @a-krishnan.bsky.social
🔗: arxiv.org/abs/2504.05058

1 year ago

Our new paper! "Analytic theory of creativity in convolutional diffusion models" led expertly by @masonkamb.bsky.social
arxiv.org/abs/2412.20292
Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs with median r^2 ≈ 0.9

1 year ago

Just read this, neat paper! I really enjoyed Figure 3 illustrating the basic idea: Suppose you train a diffusion model where the denoiser is restricted to be "local" (each pixel i only depends on its 3x3 neighborhood N(i)). The optimal local denoiser for pixel i is E[ x_0[i] | x_t[ N(i) ] ]...cont
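
A toy numpy paraphrase of that conditional expectation (my sketch, not the paper's code; kernel weights over stored training pairs stand in for exact conditioning):

```python
import numpy as np

# Estimate E[x0[i] | x_t[N(i)]] for one pixel by softly matching the
# query's 3x3 noisy patch against stored (noisy patch, clean center)
# training pairs; closer patches get exponentially larger weight.
def local_denoise(clean_centers, noisy_patches, query_patch):
    d = ((noisy_patches - query_patch) ** 2).sum(axis=(1, 2))
    w = np.exp(-d)                      # soft "match" weights
    return float((w * clean_centers).sum() / w.sum())

x0s = np.array([0.0, 1.0])              # clean center pixels
xts = np.stack([np.zeros((3, 3)), np.ones((3, 3))])
local_denoise(x0s, xts, np.ones((3, 3)))   # ~1.0: matches 2nd patch
```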
