
Tim Vieira

@xtimv.bsky.social

http://timvieira.github.io/blog

248 Followers  |  227 Following  |  5 Posts  |  Joined: 16.11.2024

Latest posts by xtimv.bsky.social on Bluesky

Many LM applications may be formulated as text generation conditional on some (Boolean) constraint.

Generate a…
- Python program that passes a test suite.
- PDDL plan that satisfies a goal.
- CoT trajectory that yields a positive reward.
The list goes on…

How can we efficiently satisfy these? 🧵👇
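The naive baseline for all of these is rejection sampling: draw completions from the LM until the Boolean constraint holds. A minimal sketch, with toy stand-ins for the sampler and the constraint (both hypothetical; a real system would call an LM and, say, a test suite):

```python
import random

def generate(rng):
    # Hypothetical stand-in for an LM sampler.
    return "".join(rng.choice("ab") for _ in range(4))

def constraint(text):
    # Hypothetical stand-in for a Boolean check,
    # e.g. "the program passes the test suite".
    return text.count("a") >= 2

def rejection_sample(rng, max_tries=10_000):
    """Sample from the LM conditioned on the constraint by redrawing
    until it holds. Exact, but arbitrarily slow when the constraint
    is rarely satisfied -- which is what motivates smarter inference."""
    for _ in range(max_tries):
        x = generate(rng)
        if constraint(x):
            return x
    raise RuntimeError("constraint never satisfied")
```

The interesting question is how to beat this baseline when the constraint almost never fires.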

13.05.2025 14:22 — 👍 10    🔁 6    💬 1    📌 0

Current KL estimation practices in RLHF can produce high-variance and even negative estimates! We propose a provably better estimator that takes only a few lines of code to implement. 🧵👇
w/ @xtimv.bsky.social and Ryan Cotterell
paper: arxiv.org/pdf/2504.10637
code: github.com/rycolab/kl-rb
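The trick, in sketch form (a generic illustration of Rao-Blackwellisation for sequence-level KL, not the paper's exact code): instead of scoring only the token that was actually sampled at each step, compute the per-step KL analytically over the whole vocabulary, conditioned on the sampled prefix. Assuming `logp_pi` and `logp_ref` are per-step lists of log-probability vectors evaluated along one sampled sequence:

```python
import math

def mc_kl(logp_pi, logp_ref, tokens):
    """Naive single-sample estimate log pi(y) - log ref(y):
    unbiased, but high variance -- and it can come out negative
    even though the true KL never is."""
    return sum(logp_pi[t][y] - logp_ref[t][y] for t, y in enumerate(tokens))

def rb_kl(logp_pi, logp_ref):
    """Rao-Blackwellised estimate: sum, over steps, the *exact* KL
    between the two next-token distributions given the sampled
    prefix. Unbiased, lower variance, and always >= 0."""
    return sum(
        sum(math.exp(lp) * (lp - lq) for lp, lq in zip(p_t, q_t))
        for p_t, q_t in zip(logp_pi, logp_ref)
    )
```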

06.05.2025 14:59 — 👍 7    🔁 3    💬 1    📌 0

#ICLR2025 Oral

How can we control LMs using diverse signals such as static analyses, test cases, and simulations?

In our paper "Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo" (w/ @benlipkin.bsky.social,
@alexlew.bsky.social, @xtimv.bsky.social) we:

25.04.2025 19:33 — 👍 7    🔁 6    💬 1    📌 0

New preprint on controlled generation from LMs!

I'll be presenting at NENLP tomorrow 12:50-2:00pm

Longer thread coming soon :)

10.04.2025 19:19 — 👍 19    🔁 9    💬 1    📌 0

Tokenization is an often-overlooked aspect of modern #NLP, but it's experiencing a resurgence — thanks in large part to @karpathy.bsky.social and his classic tweet:

x.com/karpathy/sta...

Come hang out with us and let's fix these problems!

10.02.2025 16:26 — 👍 7    🔁 1    💬 0    📌 0
Join the Token ##ization Discord Server! Check out the Token ##ization community on Discord - hang out with 24 other members and enjoy free voice and text chat.

Today we are launching a server dedicated to Tokenization research! Come join us!

discord.gg/CDJhnSvU

10.02.2025 16:26 — 👍 18    🔁 6    💬 3    📌 3
Also note that, instead of adding KL penalty in the reward, GRPO regularizes by directly adding the KL divergence between the trained policy and the reference policy to the loss, avoiding complicating the calculation of the advantage.

@xtimv.bsky.social and I were just discussing this interesting comment in the DeepSeek paper introducing GRPO: a different way of setting up the KL loss.

It's a little hard to reason about what this does to the objective. 1/
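Concretely, the penalty GRPO adds per token is the nonnegative "k3" estimator of KL(π‖ref): ratio − log ratio − 1, with ratio = π_ref/π. A sketch of the term (not DeepSeek's actual code):

```python
import math

def grpo_kl_term(logp_pi, logp_ref):
    """Per-token KL penalty added directly to the loss:
    r - log r - 1 with r = pi_ref(y|x) / pi(y|x).
    An unbiased estimator of KL(pi || ref), and always >= 0."""
    log_ratio = logp_ref - logp_pi
    return math.exp(log_ratio) - log_ratio - 1.0
```

Because this term sits in the loss rather than in the reward, the group-relative advantage is computed from the raw rewards alone.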

10.02.2025 04:32 — 👍 50    🔁 10    💬 3    📌 0

I made a starter pack for people in NLP working in the area of tokenization. Let me know if you'd like to be added

go.bsky.app/8P9ftjL

20.12.2024 18:37 — 👍 12    🔁 5    💬 0    📌 0

It's ready! 💫

A new blog post in which I list all the tools and apps I've been using for work, plus all my opinions about them.

maria-antoniak.github.io/2024/12/30/o...

Featuring @kagi.com, @warp.dev, @paperpile.bsky.social, @are.na, Fantastical, @obsidian.md, Claude, and more.

31.12.2024 05:38 — 👍 215    🔁 25    💬 36    📌 4

No amazing live music at the closing ceremony...

17.12.2024 01:17 — 👍 1    🔁 0    💬 0    📌 0

Also, what will the role of adaptive/amortized inference be?

E.g., twisted SMC
arxiv.org/abs/2404.17546

Variational best-of-N
Our version:
arxiv.org/abs/2407.06057
Google's: arxiv.org/abs/2407.14622

16.12.2024 20:01 — 👍 1    🔁 0    💬 0    📌 0

What will the hardware-friendly search algorithms be? My top picks are best-of-N and sequential Monte Carlo because they do search without backtracking.
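Best-of-N in particular is one line of logic: draw N independent completions, score them, keep the argmax. No backtracking, so the N draws batch trivially. A sketch with hypothetical `sample`/`reward` callables:

```python
def best_of_n(sample, reward, n, rng=None):
    """Draw n independent candidates and keep the highest-reward one.
    Every candidate is generated left-to-right with no backtracking,
    so the n draws can run as one big batch on an accelerator."""
    candidates = [sample(rng) for _ in range(n)]
    return max(candidates, key=reward)
```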

16.12.2024 19:06 — 👍 1    🔁 0    💬 1    📌 0

hi everyone!! let's try this optimal transport again 🙃

05.12.2024 12:58 — 👍 329    🔁 31    💬 2    📌 1

Also: you can use variables (or expressions?!) for the formatting information! #Python is cool...
More details and explanation at fstring.help
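Concretely: the format spec inside an f-string can itself contain replacement fields, so widths and precisions can be variables or even expressions:

```python
value = 3.14159
width, precision = 10, 3

# A variable inside the format spec:
assert f"{value:{width}.{precision}f}" == "     3.142"

# An arbitrary expression works too:
s = f"{value:{width * 2}.{precision + 1}f}"
assert s.strip() == "3.1416" and len(s) == 20
```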

21.11.2024 17:50 — 👍 36    🔁 6    💬 2    📌 2

Bravo!

21.11.2024 16:50 — 👍 0    🔁 0    💬 0    📌 0

Very close!

21.11.2024 16:34 — 👍 3    🔁 0    💬 1    📌 0

Surprisal of title beginning with 'O'? 3.22
Surprisal of 'o' following 'Treatment '? 0.11
Surprisal that title includes surprisal of each title character? Priceless [...I did not know titles could do this]
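(Surprisal in bits is just −log₂ p, so 3.22 bits corresponds to a probability of about 0.107:)

```python
import math

def surprisal_bits(p):
    """Surprisal (information content), in bits, of an event with probability p."""
    return -math.log2(p)

# 3.22 bits of surprisal <=> probability ~0.107
p = 2 ** -3.22
assert abs(surprisal_bits(p) - 3.22) < 1e-9
assert abs(p - 0.107) < 0.001
```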

21.11.2024 16:06 — 👍 10    🔁 2    💬 2    📌 0

Happy to share our work "Counterfactual Generation from Language Models" with @AnejSvete, @vesteinns, and Ryan Cotterell! We tackle generating true counterfactual strings from LMs after interventions and introduce a simple algorithm for it. (1/7) arxiv.org/pdf/2411.07180

12.11.2024 16:00 — 👍 14    🔁 3    💬 2    📌 0

Variational approximation with Gaussian mixtures is looking cute! So here it's just gradient descent on KL(q||p) for optimising the mixture means & covariances & weights...
@lacerbi.bsky.social

20.11.2024 18:23 — 👍 33    🔁 7    💬 2    📌 0

Gaussian approximation of a target distribution: mean-field versus full-covariance! Below shows a simple gradient descent on KL(q||p)
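The simplest version of the same recipe (1-D, where mean-field and full-covariance coincide): KL(q‖p) between Gaussians has a closed form, and plain gradient descent on q's mean and log-std recovers the target. A sketch, not the code behind the video:

```python
import math

def kl_gauss(m, s, m_p, s_p):
    """Closed-form KL(q || p) for 1-D Gaussians q = N(m, s^2), p = N(m_p, s_p^2)."""
    return math.log(s_p / s) + (s**2 + (m - m_p)**2) / (2 * s_p**2) - 0.5

def fit(m_p, s_p, steps=2000, lr=0.05):
    """Gradient descent on KL(q||p) w.r.t. q's mean and log-std,
    using the analytic gradients of the closed form above."""
    m, log_s = 0.0, 0.0
    for _ in range(steps):
        s = math.exp(log_s)
        grad_m = (m - m_p) / s_p**2          # dKL/dm
        grad_log_s = s**2 / s_p**2 - 1.0     # dKL/d(log s)
        m -= lr * grad_m
        log_s -= lr * grad_log_s
    return m, math.exp(log_s)
```

In higher dimensions the same loop applies with matrix-valued gradients for a full covariance, or per-coordinate scalars for mean-field.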

20.11.2024 08:51 — 👍 62    🔁 5    💬 3    📌 2
