
Edoardo Ponti

@edoardo-ponti.bsky.social

Assistant professor in Natural Language Processing at the University of Edinburgh and visiting professor at NVIDIA | A Kleene star shines on the hour of our meeting.

1,261 Followers  |  78 Following  |  27 Posts  |  Joined: 17.11.2024

Latest posts by edoardo-ponti.bsky.social on Bluesky


Up next on stage, Dr. @edoardo-ponti.bsky.social (@edinburgh-uni.bsky.social / NVIDIA)
🎤 “Adaptive Units of Computation: Towards Sublinear-Memory and Tokenizer-Free Foundation Models”

Fascinating glimpse into the next gen of foundation models.

#FoundationModels #NLP #TokenizerFree #ADSAI2025

09.06.2025 13:16 — 👍 2    🔁 1    💬 1    📌 0

Thanks to the amazing collaborators Adrian Łańcucki, Konrad Staniszewski, and Piotr Nawrot!

It was amazing to spend a year at NVIDIA as a visiting professor!

arXiv: arxiv.org/pdf/2506.05345

Code and models coming soon!

06.06.2025 12:33 — 👍 0    🔁 0    💬 0    📌 0

๐Ÿ† We evaluate inference-time hyper-scaling on DeepSeek R1-distilled models of different sizes, increasing accuracy on maths, science, and coding by up to 15 points for a given budget.

06.06.2025 12:33 — 👍 0    🔁 0    💬 1    📌 0

💡 The idea behind DMS is to *train* existing LLMs to evict tokens from the KV cache, while delaying the eviction until some time after the decision.

This allows LLMs to preserve information while reducing latency and memory footprint.

06.06.2025 12:33 — 👍 0    🔁 0    💬 1    📌 0
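
To make the delayed-eviction idea in the post above concrete, here is a minimal, self-contained sketch. It is not the DMS implementation: DMS *learns* the eviction decisions, whereas here the decision is a random stand-in, and the ToyKVCache class and DELAY constant are hypothetical names chosen for illustration.

```python
# Toy sketch of delayed KV-cache eviction (illustrative only, not DMS itself).
# A token marked for eviction at step t is only physically removed DELAY steps
# later, so its key/value stays readable for a short window after the decision.
import random
from collections import deque

DELAY = 4  # hypothetical delay (in decoding steps) between decision and eviction

class ToyKVCache:
    def __init__(self):
        self.cache = {}          # position -> (key, value) placeholders
        self.pending = deque()   # (step_at_which_to_evict, position)

    def step(self, t, kv, evict_decision):
        self.cache[t] = kv
        if evict_decision:                       # decided now ...
            self.pending.append((t + DELAY, t))  # ... evicted DELAY steps later
        while self.pending and self.pending[0][0] <= t:
            _, pos = self.pending.popleft()
            self.cache.pop(pos, None)            # the actual eviction

random.seed(0)
cache = ToyKVCache()
for t in range(32):
    # In DMS the eviction decision is learned; here it is a coin flip.
    cache.step(t, kv=(f"k{t}", f"v{t}"), evict_decision=random.random() < 0.5)
print(f"retained {len(cache.cache)} of 32 KV entries")
```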

โš–๏ธ The magic works only if accuracy is preserved even at high compression ratios.

Enter Dynamic Memory Sparsification (DMS), which achieves 8x KV cache compression with 1K training steps and retains accuracy better than SOTA methods.

06.06.2025 12:33 — 👍 0    🔁 0    💬 1    📌 0

🚀 By *learning* to compress the KV cache in Transformer LLMs, we can generate more tokens for the same compute budget.

This unlocks *inference-time hyper-scaling*:

For the same runtime or memory load, we can boost LLM accuracy by pushing reasoning even further!

06.06.2025 12:33 — 👍 5    🔁 3    💬 1    📌 0
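
As a rough illustration of the trade-off behind hyper-scaling, the back-of-envelope sketch below estimates KV-cache size for a hypothetical model shape (32 layers, 8 KV heads, head dim 128) and an 8x compression factor; these numbers and the kv_bytes helper are assumptions for illustration, not measurements from the paper.

```python
# Back-of-envelope KV-cache arithmetic: if the cache is compressed by a factor
# c, the same memory budget holds roughly c times more tokens, so a reasoning
# chain can run longer at about the same memory cost.
def kv_bytes(tokens, layers, kv_heads, head_dim, bytes_per_value=2):
    # 2 tensors (K and V) per layer, 16-bit values by default
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_value

dense = kv_bytes(tokens=8_192, layers=32, kv_heads=8, head_dim=128)
compressed = dense / 8  # hypothetical 8x cache compression

print(f"dense KV cache:      {dense / 2**30:.2f} GiB for 8k tokens")
print(f"8x-compressed cache: {compressed / 2**30:.2f} GiB "
      f"(or roughly 64k tokens in the same memory)")
```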

We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning 🚀

Read more 👇

21.05.2025 10:57 — 👍 92    🔁 27    💬 4    📌 6
The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs
Sparse attention offers a promising strategy to extend long-context capabilities in Transformer LLMs, yet its viability, its efficiency-accuracy trade-offs, and systematic scaling studies remain unexp...

Code: github.com/PiotrNawrot/...

Paper: arxiv.org/abs/2504.17768

Thanks to the lead author, Piotr Nawrot, and all the amazing collaborators!

25.04.2025 15:39 — 👍 4    🔁 0    💬 0    📌 0
Post image

4) Finally, we introduce novel scaling laws for sparse attention and validate them on held-out results: evidence that our findings will likely hold true broadly.

Our insights demonstrate that sparse attention will play a key role in next-generation foundation models.

25.04.2025 15:39 — 👍 3    🔁 0    💬 1    📌 0

3) There is no single best strategy across tasks and phases.

However, on average, Verticals-Slashes for prefilling and Quest for decoding are the most competitive. Context-aware and highly adaptive variants are preferable.

25.04.2025 15:39 — 👍 1    🔁 0    💬 1    📌 0
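
For readers unfamiliar with the pattern mentioned above, here is a toy sketch of what a "vertical + slash" sparsity mask looks like, under the common reading of a few globally attended columns plus diagonal bands. The vertical_slash_mask helper, the chosen columns, and the offsets are illustrative assumptions, not the code used in the study.

```python
# Toy "vertical + slash" sparse attention mask for a short causal sequence.
import numpy as np

def vertical_slash_mask(n, verticals, slash_offsets):
    mask = np.zeros((n, n), dtype=bool)
    mask[:, list(verticals)] = True              # "vertical" columns: always attended
    for off in slash_offsets:                    # "slashes": diagonal bands
        idx = np.arange(off, n)
        mask[idx, idx - off] = True
    return np.tril(mask)                         # keep the mask causal

m = vertical_slash_mask(16, verticals=[0, 1], slash_offsets=[0, 1, 2, 3])
density = m.sum() / np.tril(np.ones((16, 16), dtype=bool)).sum()
print(f"fraction of the causal mask kept: {density:.2f}")
```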

2) The sparsity attainable while statistically guaranteeing accuracy preservation is higher during decoding ✍️ than prefilling 🧠, and correlates with model size in the former.

Importantly, for most settings there is at least one degraded task, even at moderate compression ratios (<5x).

25.04.2025 15:39 — 👍 1    🔁 0    💬 1    📌 0

1) For very long sequences, *larger and highly sparse models* are preferable to small, dense ones for the same FLOPs budget.

This suggests a strategy shift where scaling up model size must be combined with sparse attention to achieve an optimal trade-off.

25.04.2025 15:39 — 👍 1    🔁 0    💬 1    📌 0
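
A back-of-envelope sanity check of this claim, using the standard per-token approximation of roughly 2x the parameter count for the dense matmuls plus an attention term that grows with the attended context. The flops_per_token helper, the model shapes, and the 5% sparsity level are illustrative assumptions, not figures from the paper.

```python
# Rough iso-budget comparison at a very long context: sparsifying attention can
# make a much larger model comparable to (or cheaper than) a small dense one.
def flops_per_token(n_params, n_layers, d_model, seq_len, sparsity=1.0):
    dense_matmuls = 2 * n_params                            # weight multiplications
    attention = 4 * n_layers * d_model * int(seq_len * sparsity)
    return dense_matmuls + attention

seq_len = 1_000_000  # a very long context
small_dense = flops_per_token(7e9, 32, 4096, seq_len, sparsity=1.0)
large_sparse = flops_per_token(70e9, 80, 8192, seq_len, sparsity=0.05)
print(f"~7B dense       : {small_dense:.2e} FLOPs per generated token")
print(f"~70B at 5% attn : {large_sparse:.2e} FLOPs per generated token")
```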

Sparse attention is one of the most promising strategies to unlock long-context processing and long-generation reasoning in LLMs.

We performed the most comprehensive study on training-free sparse attention to date.

Here is what we found:

25.04.2025 15:39 — 👍 24    🔁 6    💬 1    📌 0

🚀 Excited to welcome Dr. @edoardo-ponti.bsky.social to #ADSAI2025! Lecturer in NLP @edinburghuni.bsky.social, Affiliated Lecturer @cambridgeuni.bsky.social & Visiting Prof NVIDIA.
🎟️ Tickets for Advances in Data Science & AI Conference 2025 are live!
🔗 Secure your spot: tinyurl.com/yurknk7y
#AI

01.04.2025 13:44 — 👍 4    🔁 3    💬 0    📌 0
Image illustrating that ALM can enable Ensembling, Transfer to Bytes, and general Cross-Tokenizer Distillation.

We created Approximate Likelihood Matching, a principled (and very effective) method for *cross-tokenizer distillation*!

With ALM, you can create ensembles of models from different families, convert existing subword-level models to byte-level and a bunch more 🧵

02.04.2025 06:36 — 👍 26    🔁 14    💬 1    📌 0
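
The sketch below illustrates only the *general notion* of matching likelihoods across two tokenizations of the same text, by comparing cumulative log-likelihoods at token boundaries both tokenizers share. It is a heavily simplified stand-in, not the ALM objective from the paper; the aligned_boundary_loss helper and all log-probabilities are made up for illustration.

```python
# Toy cross-tokenizer likelihood matching (conceptual sketch, not ALM).
import statistics

def cum_loglik_at_offsets(tokens, logps):
    """Cumulative log-likelihood, keyed by the character offset of each token end."""
    out, pos, total = {}, 0, 0.0
    for tok, lp in zip(tokens, logps):
        pos += len(tok)
        total += lp
        out[pos] = total
    return out

def aligned_boundary_loss(teacher_toks, teacher_lps, student_toks, student_lps):
    t_cum = cum_loglik_at_offsets(teacher_toks, teacher_lps)
    s_cum = cum_loglik_at_offsets(student_toks, student_lps)
    shared = sorted(t_cum.keys() & s_cum.keys())   # boundaries both tokenizers share
    return statistics.mean((t_cum[o] - s_cum[o]) ** 2 for o in shared)

# Two different (toy) tokenizations of "The cat sat", with made-up log-probs.
loss = aligned_boundary_loss(
    ["The", " cat", " sat"],     [-1.2, -0.7, -2.0],        # "teacher"
    ["The", " ", "cat", " sat"], [-1.0, -0.3, -0.9, -1.8],  # "student"
)
print(f"toy cross-tokenizer matching loss: {loss:.3f}")
```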

I have a scholarship for a PhD in efficient memory and tokenization in LLM architectures at
@edinburgh-uni.bsky.social!

Eligibility: UK home fee status

Starting date: flexible, from July 2025 onwards.

informatics.ed.ac.uk/study-with-u...

Please contact me if you're interested!

31.01.2025 12:20 — 👍 6    🔁 4    💬 0    📌 0
Dynamic Memory Compression | NVIDIA Technical Blog
Despite the success of large language models (LLMs) as general-purpose AI tools, their high demand for computational resources makes their deployment challenging in many real-world scenarios.

Code and models for Dynamic Memory Compression are finally available!

Stay tuned for architectures with even more efficient inference.

developer.nvidia.com/blog/dynamic...

31.01.2025 12:00 — 👍 4    🔁 0    💬 0    📌 0

We're hiring a lecturer or reader in embodied NLP at the University of Edinburgh!

Deadline: 31 Jan 2025
Call for applications: elxw.fa.em3.oraclecloud.com/hcmUI/Candid...

22.12.2024 09:46 — 👍 29    🔁 11    💬 0    📌 0
A Grounded Typology of Word Classes
Hosted on the Open Science Framework

What's in the future?
- Richer proxies for meaning, including a temporal dimension and internal agent states
- The study of grammaticalization through the lens of groundedness

We release an extensive dataset to support these studies: osf.io/bdhna/

20.12.2024 20:30 — 👍 3    🔁 0    💬 0    📌 0

We focus on the groundedness of lexical classes and find that it
- follows a continuous cline cross-linguistically: nouns > adjectives > verbs
- is non-zero even for functional classes (e.g., adpositions)
- is contextual, so agrees with psycholinguistic norms only in part

20.12.2024 20:30 — 👍 0    🔁 0    💬 1    📌 0

We leverage advances in multilingual and multimodal foundation models to quantify their surprisal for both form alone and form given function.

Their difference (the pointwise mutual information) corresponds to the groundedness of a word: the reduction in surprisal once its function is known.

20.12.2024 20:30 — 👍 0    🔁 0    💬 1    📌 0
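
In code, the measure described above reduces to a difference of surprisals, i.e. the pointwise mutual information between a word's form and a perceptual proxy for its function. The groundedness helper and the probabilities below are made-up stand-ins for a multimodal model's estimates, used only to show the arithmetic.

```python
# Groundedness as PMI: surprisal(form) minus surprisal(form | grounded function).
import math

def groundedness(p_form, p_form_given_function):
    """How many bits of surprisal are explained once the (grounded) function is known."""
    return -math.log2(p_form) - (-math.log2(p_form_given_function))

# Toy numbers: a concrete noun becomes much more predictable once the image is
# known; a function word barely does, so its groundedness is close to zero.
print(f"'dog': {groundedness(1e-4, 2e-2):+.2f} bits")
print(f"'of' : {groundedness(2e-2, 2.2e-2):+.2f} bits")
```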

**Grounded typology**: a new paradigm.

Traditionally, linguists posit functions to compare forms in different languages; however, these are aprioristic and partly arbitrary.

Instead, we resort to perceptual modalities (like vision) as measurable proxies for function.

20.12.2024 20:30 — 👍 4    🔁 0    💬 1    📌 0

Two considerations:

1) Reusing / interpolating old tokens is reminiscent of our FOCUS baseline. Unfortunately, it degrades performance, as even identical tokens may change their function.

2) You incur a large overhead for calculating the co-occurrence matrix for every new tokenizer.

12.12.2024 22:55 — 👍 0    🔁 0    💬 1    📌 0


Two amazing papers from my students at #NeurIPS today:

โ›“๏ธ๐Ÿ’ฅ Switch the vocabulary and embeddings of your LLM tokenizer zero-shot on the fly (@bminixhofer.bsky.social)
neurips.cc/virtual/2024...

🌊 Align your LLM gradient-free with spectral editing of activations (Yifu Qiu)
neurips.cc/virtual/2024...

12.12.2024 17:45 — 👍 46    🔁 8    💬 2    📌 0

ML is the fox?

04.12.2024 22:29 — 👍 3    🔁 0    💬 1    📌 0

Plus Edinburgh gets the Saltire, but Cambridge gets the Union Jack. Vexillologists are in shambles

04.12.2024 16:55 — 👍 5    🔁 0    💬 2    📌 0

We had a blast at this year's @ellis.eu Dagstuhl seminar on "Modular and Agentive LLMs".

Thanks everyone for participating!

28.11.2024 11:50 — 👍 28    🔁 3    💬 0    📌 0
AI that mimics human problem solving is a big advance – but comes with new risks and problems
Chain of thought reasoning has been used in OpenAI's new AI model.

Check out this piece on Strawberry 🍓/o1 we just authored on The Conversation! theconversation.com/ai-that-mimi... with @edoardo-ponti.bsky.social and Kolya 🚀

26.11.2024 07:50 — 👍 7    🔁 4    💬 1    📌 0

Several incredible NeurIPS tutorials this year. Worth navigating through the Swifties.

21.11.2024 22:01 — 👍 39    🔁 6    💬 1    📌 0

Thanks, Sasha! Your tutorials are a great source of inspiration for us.

22.11.2024 11:53 — 👍 1    🔁 0    💬 0    📌 0
