
@pentagonalize.bsky.social

10 Followers  |  19 Following  |  8 Posts  |  Joined: 21.11.2024

Latest posts by pentagonalize.bsky.social on Bluesky

Deadline in just under two weeks!

31.01.2026 00:14 · 👍 1    🔁 1    💬 0    📌 0

Thank you on behalf of the organizing committee: Robert Frank, Lena Strobl, Dana Angluin, Timos Antonopoulos, Arman Cohan, Tom McCoy, Ruzica Piskac, Andy Yang

19.12.2025 02:58 · 👍 0    🔁 0    💬 0    📌 0
FLaNN Workshop 2026

Location: Yale University, New Haven, Connecticut, USA
Workshop date: May 11-13, 2026
Abstract submissions due: February 12, 2026
Website: flann.cs.yale.edu
Contact: flann@cs.yale.edu

More information to come!

19.12.2025 02:58 · 👍 0    🔁 0    💬 1    📌 0

Announcing the first Workshop on Formal Languages and Neural Networks (FLaNN)!

We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs).

19.12.2025 02:58 · 👍 10    🔁 5    💬 1    📌 2

Read the cookbook: arxiv.org/abs/2510.00368

Join us for weekly seminars on formal language theory, ML, NLP, and more: flannseminars.github.io

03.10.2025 16:24 · 👍 1    🔁 2    💬 0    📌 0

Thanks to all the chefs: @ccwatson.bsky.social, @antonxue.bsky.social, @satwik77.bsky.social, @ll4r3n4.bsky.social, @lambdaviking.bsky.social, Emile Dos Santos Ferreira, @anejsvete.bsky.social, @dchiang.bsky.social

03.10.2025 16:24 · 👍 2    🔁 2    💬 1    📌 0

There is no better way to understand what transformers can do than to get your hands dirty and construct them, weight by weight. The Transformer Cookbook provides a guide for anyone aiming to understand the expressive power of transformers at this formal level.

03.10.2025 16:24 · 👍 1    🔁 2    💬 1    📌 0
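To make "weight by weight" concrete, here is a minimal numpy sketch in the cookbook's spirit (a toy example of mine, not code from the paper): a single attention head whose query, key, and value matrices are filled in by hand so that every position copies the token immediately to its left, using one-hot token and position embeddings and a large logit scale to make softmax effectively hard.

import numpy as np

# Toy hand-built attention head (illustrative sketch, not the cookbook's code).
# Embedding of position i is [one-hot token | one-hot position].
n, V = 6, 4                   # sequence length, vocabulary size
d = V + n
beta = 1e4                    # large logit scale -> softmax is near-hard

tokens = np.array([2, 0, 3, 3, 1, 0])
X = np.zeros((n, d))
X[np.arange(n), tokens] = 1.0             # token block
X[np.arange(n), V + np.arange(n)] = 1.0   # position block

# Queries read position i; keys read position j shifted right by one,
# so <q_i, k_j> = 1 exactly when j = i - 1.
W_Q = np.zeros((d, n)); W_Q[V:, :] = np.eye(n)
W_K = np.zeros((d, n)); W_K[V + np.arange(n - 1), np.arange(1, n)] = 1.0
W_V = np.zeros((d, V)); W_V[:V, :] = np.eye(V)   # values expose the token block

Q, K, Vals = X @ W_Q, X @ W_K, X @ W_V
scores = beta * (Q @ K.T)
scores[np.triu_indices(n, k=1)] = -np.inf       # causal mask
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Position 0 has no left neighbor and attends to itself.
print((weights @ Vals).argmax(axis=1))  # [2 2 0 3 3 1]: each previous token

This "previous-token" head is one of the standard building blocks that constructions like induction heads are assembled from.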
Preview: The Transformer Cookbook · We present the transformer cookbook: a collection of techniques for directly encoding algorithms into a transformer's parameters. This work addresses the steep learning curve of such endeavors, a prob...

We present The Transformer Cookbook: a collection of recipes for programming algorithms directly into transformers!

Hungry for an induction head? Craving a Dyck language recognizer? We show you step-by-step how to cook up transformers for these algorithms and many more!

03.10.2025 16:24 · 👍 5    🔁 5    💬 1    📌 0
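For a taste of the Dyck-1 recognizer mentioned above, here is a toy sketch of the counting idea that such constructions commonly rest on (a simplified illustration, not the cookbook's actual recipe): uniform causal attention computes, at each position, the running mean of +1 for "(" and -1 for ")", and a string is balanced exactly when that mean never goes negative and ends at zero.

import numpy as np

# Dyck-1 via counting with uniform causal attention (illustrative sketch).
def dyck1_balanced(s: str) -> bool:
    x = np.array([1.0 if c == "(" else -1.0 for c in s])
    n = len(x)
    # Uniform causal attention: position i averages values at 0..i,
    # so row i of (attn @ x) equals prefix_sum(i) / (i + 1).
    attn = np.tril(np.ones((n, n)))
    attn /= attn.sum(axis=1, keepdims=True)
    running_mean = attn @ x
    return bool(np.all(running_mean >= 0) and running_mean[-1] == 0)

print(dyck1_balanced("(()())"))   # True
print(dyck1_balanced("())("))     # False: the mean dips below zero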
Preview: Simulating Hard Attention Using Soft Attention · We study conditions under which transformers using soft attention can simulate hard attention, that is, effectively focus all attention on a subset of positions. First, we examine several variants of ...

New paper and two not-so-new papers on arXiv about transformer expressivity: (1) With @pentagonalize and Dana Angluin, "Simulating Hard Attention Using Soft Attention" arxiv.org/abs/2412.09925

23.12.2024 22:55 · 👍 3    🔁 1    💬 2    📌 0
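The core phenomenon behind that title can be shown in a few lines (an illustration of the general temperature effect, not the paper's construction): scaling attention logits by a growing factor drives softmax toward hard attention, with the mass split evenly over tied maxima.

import numpy as np

# Scaled soft attention approaches hard attention (illustrative sketch).
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([0.3, 1.0, 0.2, 1.0])   # two tied maximum positions
for beta in (1, 10, 100):
    print(beta, softmax(beta * scores).round(3))
# beta=1  -> [0.169 0.339 0.153 0.339]  mass spread over all positions
# beta=100 -> [0.    0.5   0.    0.5  ]  hard attention, averaged over ties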
