So you want to skip our thinning proofs, but you'd still like our out-of-the-box attention speedups? I'll be presenting the Thinformer at two ICML workshop posters tomorrow!
Catch me at ES-FoMo (1-2:30, East Hall A) and at LCFM (10:45-11:30 & 3:30-4:30, West 202-204)
19.07.2025 07:04
Low-Rank Thinning
The goal in thinning is to summarize a dataset using a small set of representative points. Remarkably, sub-Gaussian thinning algorithms like Kernel Halving and Compress can match the quality of unifor...
If you're not at ICML, you can still read our work. Our new theoretically principled algorithms beat recent baselines across multiple tasks, including Transformer approximation!
14.07.2025 18:29
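For readers new to thinning, here is a minimal sketch of the setup: summarize a large dataset with far fewer points and measure how well the summary matches the original via a Gaussian-kernel maximum mean discrepancy. A uniform subsample stands in for the paper's Kernel Halving and Compress algorithms; the dataset, sizes, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd(X, S, bandwidth=1.0):
    # Kernel maximum mean discrepancy between the full set X and its summary S
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kss = gaussian_kernel(S, S, bandwidth).mean()
    kxs = gaussian_kernel(X, S, bandwidth).mean()
    return np.sqrt(max(kxx + kss - 2 * kxs, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 2))                    # full dataset: n = 1024 points in R^2
m = int(np.sqrt(len(X)))                          # summary size ~ sqrt(n), here 32
S = X[rng.choice(len(X), size=m, replace=False)]  # uniform subsample as a toy stand-in
print(f"summarized {len(X)} points with {m}; MMD to full set: {mmd(X, S):.4f}")
```

The paper's point is that sub-Gaussian thinning algorithms can do much better than this uniform-subsampling baseline at the same summary size.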
Your data is low-rank, so stop wasting compute! In our new paper on low-rank thinning, we share one weird trick to speed up Transformer inference, SGD training, and hypothesis testing at scale. Come by ICML poster W-1012 Tuesday at 4:30!
14.07.2025 18:29
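To make the attention claim concrete, here is a rough sketch of the generic idea: attend to a thinned set of key-value pairs instead of all of them, cutting the quadratic cost of softmax attention. A uniform subsample again stands in for the paper's Thinformer summary, so this shows the shape of the compute savings, not the actual algorithm; sizes and names are made up for illustration.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard softmax attention: softmax(Q K^T / sqrt(d)) V
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, m, d = 2048, 64, 32                           # sequence length, summary size, head dim
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

full = softmax_attention(Q, K, V)                # exact attention: O(n^2 d)
idx = rng.choice(n, size=m, replace=False)       # thinned key-value summary (toy choice)
approx = softmax_attention(Q, K[idx], V[idx])    # approximate attention: O(n m d)
print(f"max abs error keeping {m}/{n} key-value pairs: {np.abs(full - approx).max():.3f}")
```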
Off to ICML next week?
Check out my student Annabelle's paper in collaboration with @lestermackey.bsky.social and colleagues on low-rank thinning!
New theory, dataset compression, efficient attention and more:
arxiv.org/abs/2502.12063
12.07.2025 16:27