
Anej Svete

@anejsvete.bsky.social

PhD student in NLP at ETH Zurich. anejsvete.github.io

236 Followers  |  126 Following  |  7 Posts  |  Joined: 21.11.2024

Latest posts by anejsvete.bsky.social on Bluesky

Andy Yang, Christopher Watson, Anton Xue, Satwik Bhattamishra, Jose Llarena, William Merrill, Emile Dos Santos Ferreira, Anej Svete, David Chiang: The Transformer Cookbook https://arxiv.org/abs/2510.00368 https://arxiv.org/pdf/2510.00368 https://arxiv.org/html/2510.00368

02.10.2025 06:33 — 👍 0    🔁 2    💬 0    📌 0
Preview: The Transformer Cookbook. "We present the transformer cookbook: a collection of techniques for directly encoding algorithms into a transformer's parameters. This work addresses the steep learning curve of such endeavors, a prob..."

We present The Transformer Cookbook: a collection of recipes for programming algorithms directly into transformers!

Hungry for an induction head? Craving a Dyck language recognizer? We show you step-by-step how to cook up transformers for these algorithms and many more!

03.10.2025 16:24 — 👍 5    🔁 5    💬 1    📌 0
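
To give a taste of what such a recipe looks like, here is a minimal hand-set attention head in the same spirit (my own sketch, not one of the paper's constructions): with one-hot positional encodings, the query and key matrices are chosen so that position i attends almost entirely to position i-1 and copies its token embedding.

```python
import numpy as np

# A minimal cookbook-style construction (illustrative sketch, not from the
# paper): one attention head whose weights are set by hand so that position i
# attends to position i-1 and copies that position's token embedding.

rng = np.random.default_rng(0)
n = 6                              # sequence length
P = np.eye(n)                      # P[i] = one-hot positional encoding of i
tokens = rng.normal(size=(n, 4))   # arbitrary token embeddings (value dim 4)

Q = np.roll(P, -1, axis=1)         # query of position i is the one-hot of i-1
K = P                              # key of position j announces "I am position j"
V = tokens

scale = 100.0                      # a large scale makes softmax effectively hard
scores = scale * (Q @ K.T)
mask = np.tril(np.ones((n, n)))    # causal mask: position i sees only j <= i
scores = np.where(mask == 1, scores, -np.inf)

attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ V

assert np.allclose(out[1:], tokens[:-1], atol=1e-6)  # i copied the token at i-1
```

The paper's actual recipes handle the subtleties this toy sidesteps (exact hard attention versus softmax, finite precision, and so on), but the flavor is the same: choose weights so attention performs the index arithmetic you need.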

Introducing Asta: our bold initiative to accelerate science with trustworthy, capable agents, benchmarks, & developer resources that bring clarity to the landscape of scientific AI + agents. 🧵

26.08.2025 13:05 — 👍 21    🔁 4    💬 3    📌 1

As part of Asta, our initiative to accelerate science with trustworthy AI agents, we built AstaBench, the first comprehensive benchmark to compare them. ⚖️

26.08.2025 15:02 — 👍 6    🔁 3    💬 1    📌 0

Ai2 is excited to be at #ACL2025 in Vienna, Austria this week. Come say hello, meet the team, and chat about the future of NLP. See you there! 🤝📚

28.07.2025 17:00 — 👍 9    🔁 3    💬 0    📌 0
Preview: Model Release Heatmap, a Hugging Face Space by cfahlgren1. "Search this app to see model release activity for any Hugging Face organization or user over time. Just enter the org name to view their heatmap."

We are #1 on the @huggingface heatmap - this is what true openness looks like! 🥇🎉

750+ models
230+ datasets
And counting...

Come build with us

huggingface.co/spaces/cfahl...

12.06.2025 18:16 — 👍 8    🔁 3    💬 0    📌 0

6/ The work refines the landscape of transformer expressivity and demonstrates that seemingly minor implementation details can have major theoretical consequences for what neural architectures can represent.

17.05.2025 14:32 — 👍 0    🔁 0    💬 0    📌 0

5/ This might help explain why positional encodings that skew attention toward recent (rightmost) tokens, like ALiBi, work so well in practice. They're compensating for an inherent limitation in conventional attention mechanisms.

17.05.2025 14:32 — 👍 0    🔁 0    💬 1    📌 0
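
For context on the mechanism named above: ALiBi adds a head-specific penalty to attention scores that grows linearly with distance, which is precisely a skew toward recent tokens. A small illustrative sketch (the slope value here is arbitrary; in the actual method it is a per-head constant):

```python
import numpy as np

# ALiBi-style recency bias: a linear penalty on attention scores that grows
# with token distance, so closer tokens win ties and near-ties.

def alibi_bias(seq_len: int, slope: float) -> np.ndarray:
    # bias[i, j] = -slope * (i - j) for j <= i (causal); -inf for j > i
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    bias = -slope * (i - j).astype(float)
    bias[j > i] = -np.inf
    return bias

scores = np.zeros((5, 5))                 # identical raw scores: a pure tie
probs = np.exp(scores + alibi_bias(5, slope=1.0))
probs /= probs.sum(-1, keepdims=True)
print(np.round(probs[4], 3))              # mass concentrates on recent positions
```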

4/ Here's why this matters: leftmost-tiebreaking transformers are actually equivalent to soft-attention transformers in terms of expressivity! This suggests they might better approximate real-world transformers than rightmost-tiebreaking models do.

17.05.2025 14:32 — 👍 0    🔁 0    💬 1    📌 0

3/ Specifically, we show that leftmost-tiebreaking models correspond to a strictly weaker fragment of Linear Temporal Logic (LTL). While rightmost tiebreaking enables the full power of LTL, leftmost models are limited to the "past" fragment.

17.05.2025 14:31 — 👍 0    🔁 0    💬 1    📌 0
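
For readers without the logic background handy, here are the standard grammars being contrasted (textbook definitions; the paper's exact fragments may be stated differently): full LTL has both future and past temporal operators, while the past fragment keeps only the past ones.

```latex
% Full LTL with past operators, over finite words (standard syntax):
%   X = next, U = until (future);  Y = yesterday, S = since (past)
\varphi ::= p \mid \lnot\varphi \mid \varphi \land \varphi
        \mid \mathsf{X}\varphi \mid \varphi \,\mathsf{U}\, \varphi
        \mid \mathsf{Y}\varphi \mid \varphi \,\mathsf{S}\, \varphi

% The "past" fragment restricts to the purely past operators:
\varphi ::= p \mid \lnot\varphi \mid \varphi \land \varphi
        \mid \mathsf{Y}\varphi \mid \varphi \,\mathsf{S}\, \varphi
```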

2/ We analyzed future-masked unique hard attention transformers and found that those with leftmost tiebreaking are strictly less expressive than those with rightmost tiebreaking. The title "A Tale of Two Sides" nicely captures how these two models differ.

17.05.2025 14:31 — 👍 0    🔁 0    💬 1    📌 0

1/ When multiple positions achieve the maximum attention score in a transformer, we need a tiebreaking mechanism. Should we pick the leftmost or rightmost position? It turns out that this seemingly trivial implementation detail dramatically affects what transformers can express!

17.05.2025 14:30 — 👍 0    🔁 0    💬 1    📌 0
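
Concretely, the tiebreaking choice can be stated in a few lines (an illustrative sketch, not the paper's formal model): unique hard attention puts all weight on a single maximal-scoring position, and the rule decides which one when the maximum is attained more than once.

```python
import numpy as np

# Unique hard attention under the two tiebreaking rules (illustrative sketch).

def unique_hard_attention(scores: np.ndarray, values: np.ndarray, side: str):
    # scores: (n,) attention scores for one query; values: (n, d)
    maxima = np.flatnonzero(scores == scores.max())   # all argmax positions
    pick = maxima[0] if side == "leftmost" else maxima[-1]
    return values[pick]

scores = np.array([1.0, 3.0, 3.0, 2.0])               # positions 1 and 2 tie
values = np.arange(8, dtype=float).reshape(4, 2)      # one value vector per position
print(unique_hard_attention(scores, values, "leftmost"))   # -> values[1]
print(unique_hard_attention(scores, values, "rightmost"))  # -> values[2]
```
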
Preview: Unique Hard Attention: A Tale of Two Sides. "Understanding the expressive power of transformers has recently attracted attention, as it offers insights into their abilities and limitations. Many studies analyze unique hard attention transformers..."

🧵 Excited to share our paper "Unique Hard Attention: A Tale of Two Sides" with Selim, Jiaoda, and Ryan, where we show that the way transformers break ties in attention scores has profound implications for their expressivity! And it got accepted to ACL! :)

The paper: arxiv.org/abs/2503.14615

17.05.2025 14:28 — 👍 2    🔁 1    💬 1    📌 0

Current KL estimation practices in RLHF can produce high-variance and even negative estimates! We propose a provably better estimator that takes only a few lines of code to implement. 🧵👇
w/ @xtimv.bsky.social and Ryan Cotterell
paper: arxiv.org/pdf/2504.10637
code: github.com/rycolab/kl-rb

06.05.2025 14:59 — 👍 7    🔁 3    💬 1    📌 0
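
A toy sketch of the contrast as I read it (assumptions: the standard practice is the sampled-token log-ratio estimator, and the improvement is a Rao-Blackwellized estimator that replaces each step's term with the exact per-step KL over the vocabulary; the toy distributions below stand in for a policy and a reference model and are not the paper's code):

```python
import numpy as np

# Toy contrast between the standard Monte Carlo KL estimator and a
# Rao-Blackwellized one (illustrative sketch; names and distributions
# are stand-ins, not the paper's implementation).

rng = np.random.default_rng(0)
V, T = 5, 4                                    # toy vocabulary size and length

mc, rb = 0.0, 0.0
for _ in range(T):
    p = rng.dirichlet(np.ones(V))              # policy next-token distribution
    q = rng.dirichlet(np.ones(V))              # reference next-token distribution
    y = rng.choice(V, p=p)                     # token actually sampled from the policy
    mc += np.log(p[y]) - np.log(q[y])          # standard estimator: can go negative
    rb += np.sum(p * (np.log(p) - np.log(q)))  # exact per-step KL: always >= 0

print(f"MC estimate: {mc:+.3f}")               # noisy, possibly negative
print(f"RB estimate: {rb:+.3f}")               # nonnegative, lower variance
```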

I will be at #NeurIPS2024 in Vancouver. I am excited to meet people working on AI Safety and Security. Drop a DM if you want to meet.

I will be presenting two (spotlight!) works. Come say hi to our posters.

09.12.2024 17:02 — 👍 4    🔁 1    💬 1    📌 0

No joke, FLaNN is one of the most interesting servers around. Check out the website for talk information!

flann.super.site

26.11.2024 16:04 — 👍 8    🔁 1    💬 0    📌 1

Happy to share our work "Counterfactual Generation from Language Models" with @AnejSvete, @vesteinns, and Ryan Cotterell! We tackle generating true counterfactual strings from LMs after interventions and introduce a simple algorithm for it. (1/7) arxiv.org/pdf/2411.07180

12.11.2024 16:00 — 👍 14    🔁 3    💬 2    📌 0
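
One standard way to make "true counterfactuals" operational is the Gumbel-max reparametrization: treat the sampling noise as exogenous, intervene on the model, and replay the identical noise. A toy sketch of that idea (my illustration; a real LM's per-step logits depend on the generated prefix, which this toy holds fixed):

```python
import numpy as np

# Gumbel-max counterfactual sketch: fix exogenous noise, edit the model,
# replay the same noise (illustrative; per-step logits of a real LM depend
# on the prefix, which this toy holds fixed).

rng = np.random.default_rng(1)
V, T = 5, 6                                     # toy vocabulary size and length

logits_factual = rng.normal(size=(T, V))        # stand-in per-step LM logits
noise = rng.gumbel(size=(T, V))                 # exogenous noise, drawn ONCE

factual = np.argmax(logits_factual + noise, axis=-1)

logits_edited = logits_factual.copy()           # toy "intervention" on the model:
logits_edited[:, 2] += 2.0                      # boost one vocabulary item

counterfactual = np.argmax(logits_edited + noise, axis=-1)

print("factual:       ", factual)
print("counterfactual:", counterfactual)        # changes only where the edit won
```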
