Deadline in just under two weeks!
31.01.2026 00:14
Thank you on behalf of the organizing committee: Robert Frank, Lena Strobl, Dana Angluin, Timos Antonopoulos, Arman Cohan, Tom McCoy, Ruzica Piskac, Andy Yang
19.12.2025 02:58
Location: Yale University, New Haven, Connecticut, USA
Workshop dates: May 11-13, 2026
Abstract submissions due: February 12, 2026
Website: flann.cs.yale.edu
Contact: flann@cs.yale.edu
More information to come!
Announcing the first Workshop on Formal Languages and Neural Networks (FLaNN)!
We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs).
Read the cookbook: arxiv.org/abs/2510.00368
Join us for weekly seminars on formal language theory, ML, NLP, and more: flannseminars.github.io
Thanks to all the chefs: @ccwatson.bsky.social, @antonxue.bsky.social, @satwik77.bsky.social, @ll4r3n4.bsky.social, @lambdaviking.bsky.social, Emile Dos Santos Ferreira, @anejsvete.bsky.social, @dchiang.bsky.social
03.10.2025 16:24
There is no better way to understand what transformers can do than to get your hands dirty and construct them, weight by weight. The Transformer Cookbook provides a guide for anyone aiming to understand the expressive power of transformers at this formal level.
03.10.2025 16:24
We present The Transformer Cookbook: a collection of recipes for programming algorithms directly into transformers!
Hungry for an induction head? Craving a Dyck language recognizer? We show you step-by-step how to cook up transformers for these algorithms and many more!
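To give a flavor of what such a recipe looks like, here is a minimal sketch of one classic construction idea: recognizing Dyck-1 (balanced parentheses) with a single causal uniform-attention head. This is our own toy illustration under simplifying assumptions, not code from the cookbook itself.

```python
import numpy as np

# Toy sketch (not the cookbook's code): a single causal attention head with
# *uniform* attention weights can recognize Dyck-1, the language of balanced
# parentheses. Embed '(' as +1 and ')' as -1; uniform attention over the
# prefix then computes the running bracket balance divided by prefix length.
# A string is in Dyck-1 iff every prefix average is >= 0 and the final
# average is exactly 0.

def dyck1_by_uniform_attention(s: str) -> bool:
    if not s:
        return True  # the empty string is balanced
    vals = np.array([1.0 if c == "(" else -1.0 for c in s])
    # Causal uniform attention: position i attends equally to tokens 0..i.
    prefix_avg = np.cumsum(vals) / np.arange(1, len(vals) + 1)
    return bool(np.all(prefix_avg >= 0.0) and np.isclose(prefix_avg[-1], 0.0))

assert dyck1_by_uniform_attention("(()())")
assert not dyck1_by_uniform_attention("())(")
```

The averaging trick works because uniform weights are exactly what softmax attention produces when all query-key scores are constant, so the running balance check fits in one attention layer plus a thresholding step.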
New paper and two not-so-new papers on arXiv about transformer expressivity: (1) With @pentagonalize and Dana Angluin, "Simulating Hard Attention Using Soft Attention" arxiv.org/abs/2412.09925
23.12.2024 22:55
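The idea in the title of (1) can be seen in miniature: as attention logits are scaled up, softmax concentrates all its weight on the maximum-scoring position, so soft attention approaches hard attention. The snippet below is a toy illustration of that limiting behavior under our own setup, not the paper's construction.

```python
import numpy as np

# Toy illustration (not the paper's construction): multiplying attention
# scores by a large scale factor makes softmax concentrate on the argmax,
# so soft attention approximates hard (argmax) attention.

def soft_attention(scores: np.ndarray, values: np.ndarray, scale: float = 1.0) -> float:
    w = np.exp(scale * (scores - scores.max()))  # numerically stable softmax
    w /= w.sum()
    return float(w @ values)

scores = np.array([0.1, 0.9, 0.3])
values = np.array([10.0, 20.0, 30.0])
print(soft_attention(scores, values, scale=1.0))    # a blend of all values
print(soft_attention(scores, values, scale=100.0))  # ~20.0: the hard-attention output
```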