
Ziteng Sun

@sziteng.bsky.social

Responsible and efficient AI. Topics: LLM efficiency; LLM alignment; Differential Privacy; Information Theory. Research Scientist @Google; PhD @Cornell

46 Followers  |  31 Following  |  15 Posts  |  Joined: 11.02.2025

Latest posts by sziteng.bsky.social on Bluesky

Joint work w/ wonderful colleagues at Google: Ananth Balashankar (co-lead), @jonathanberant.bsky.social, @jacobeisenstein.bsky.social, Michael Collins, Adrian Hutter, Jong Lee, Chirag Nagpal, Flavien Prost, Ananda Theertha Suresh, and @abeirami.bsky.social.

11.02.2025 16:26 | 👍 3    🔁 0    💬 0    📌 0

Check out the paper for more details: arxiv.org/pdf/2412.19792.

11.02.2025 16:26 | 👍 1    🔁 0    💬 1    📌 0
Post image

We also show that our proposed reward calibration method is a strong baseline for optimizing the standard win rate on all considered datasets, with performance comparable to or better than other SOTA methods, demonstrating the benefits of the reward calibration step.

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

For Worst-of-N, we use the Anthropic harmlessness dataset and observe similar improvements. The best improvement is achieved by an exponential transformation with a negative exponent.

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image Post image

We empirically compare InfAlign-CTRL with other SOTA alignment methods. For Best-of-N, we use the Anthropic helpfulness and Reddit summarization quality datasets. We show that it offers 3-8% improvements in inference-time win rates, achieved by an exponential transformation with a positive exponent.

11.02.2025 16:26 | 👍 1    🔁 0    💬 1    📌 0
Post image

We provide an analytical tool to compare InfAlign-CTRL with different transformation functions for a given inference-time procedure. We find that exponential transformations, which optimize different quantiles of the reward for different values of t, achieve close-to-optimal performance for BoN and WoN.
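
As an illustration of the family referred to here (the exact parametrization used in the paper may differ; this form is an assumption), an exponential transformation of the calibrated reward r ∈ [0, 1] with parameter t:

```latex
% Illustrative exponential transformation: t > 0 emphasizes the upper
% quantiles of the calibrated reward (BoN), t < 0 the lower quantiles (WoN).
\Phi_t(r) = \exp(t \cdot r), \qquad r \in [0, 1]
```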

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

We then particularize the study to two popular inference-time strategies: Best-of-N sampling (BoN) and Best-of-N jailbreaking (which we refer to as Worst-of-N, WoN). Despite its simplicity, BoN is known to be an effective procedure for inference-time alignment and scaling. Variants of WoN are effective for evaluating safety against jailbreaks.
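
A small sketch of the two procedures, assuming a scalar scorer; the function and argument names are illustrative, not the paper's:

```python
def best_of_n(prompt, sample_fn, score_fn, n=8):
    """Best-of-N: draw n candidates from the policy and keep the one
    with the highest score (quality / reward)."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda y: score_fn(prompt, y))

def worst_of_n(prompt, sample_fn, score_fn, n=8):
    """Worst-of-N (BoN jailbreaking): keep the lowest-scoring candidate,
    e.g. to stress-test a model's safety."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    return min(candidates, key=lambda y: score_fn(prompt, y))
```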

11.02.2025 16:26 | 👍 1    🔁 0    💬 1    📌 0
Post image

The reward calibration step makes the alignment objective more robust to the idiosyncrasies of the reward model's training process. We empirically show that it can help mitigate reward hacking. The transformation function allows us to further tailor the alignment objective to different inference-time procedures.
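
One concrete way to write the calibration step (notation mine; the paper's exact definition may differ) is as the reward's quantile under the reference policy:

```latex
% Calibrated reward: the CDF of r(x, \cdot) under \pi_{\mathrm{ref}},
% evaluated at the response y, so r_{\mathrm{cal}}(x, y) \in [0, 1].
r_{\mathrm{cal}}(x, y)
  = \mathbb{P}_{y' \sim \pi_{\mathrm{ref}}(\cdot \mid x)}\!\left[ r(x, y') \le r(x, y) \right]
```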

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

To enable practical solutions, we provide the calibrate-and-transform RL (InfAlign-CTRL) algorithm to solve this problem, which involves a reward calibration step and a KL-regularized reward maximization step with a transformation Φ of the calibrated reward.
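
A minimal sketch of the two steps as described above; the sampling-based empirical calibration, the helper names, and the exponential form of Φ are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def calibrated_reward(reward, prompt, response, ref_samples):
    """Calibration step (sketch): map the raw reward to [0, 1] via its
    empirical CDF over responses drawn from the reference policy."""
    ref_scores = np.array([reward(prompt, y) for y in ref_samples])
    return float(np.mean(ref_scores <= reward(prompt, response)))

def ctrl_reward(reward, prompt, response, ref_samples, t=1.0):
    """Transform step (sketch): apply a transformation Phi to the calibrated
    reward; Phi(r) = exp(t * r) is used here purely for illustration."""
    r_cal = calibrated_reward(reward, prompt, response, ref_samples)
    return float(np.exp(t * r_cal))

# The transformed reward would then replace the raw reward inside a standard
# KL-regularized RLHF / policy-optimization loop.
```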

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

We show that the optimal reward transformation satisfies a coupled optimization objective over the transformed reward and the policy, which lends itself to iterative optimization. However, this approach is unfortunately computationally inefficient and infeasible for real-world models.

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

Somewhat surprisingly, we prove that for any inference-time decoding procedure, the optimal aligned policy is the solution to the standard RLHF problem with a transformation of the reward. Therefore, the challenge reduces to designing a suitable reward transformation.
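
In symbols, the reduction reads roughly as follows (a paraphrase in notation of my own, with Φ the reward transformation):

```latex
% The inference-aware optimum solves standard KL-regularized RLHF,
% only with the reward r replaced by a transformed reward \Phi(r).
\pi^{\star}_{T} \in \arg\max_{\pi}\;
  \mathbb{E}_{x \sim \mu}\Big[
    \mathbb{E}_{y \sim \pi(\cdot \mid x)}\!\left[ \Phi\!\left( r(x, y) \right) \right]
    - \beta\, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right)
  \Big]
```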

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

To characterize this, we propose a framework for inference-aware alignment (InfAlign), which aims to optimize the inference-time win rate of the aligned policy. We show that the standard RLHF framework is sub-optimal with respect to this metric.
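
A hedged sketch of the metric (notation mine): if T denotes the inference-time procedure applied on top of a policy, the inference-time win rate compares outputs after T is applied to both sides.

```latex
% Inference-time win rate: T(\pi) is the output distribution obtained by
% running the inference-time procedure T (e.g. Best-of-N) on policy \pi.
W_T(\pi \,\|\, \pi_{\mathrm{ref}})
  = \mathbb{E}_{x \sim \mu}\,
    \mathbb{P}_{y \sim T(\pi)(\cdot \mid x),\; y' \sim T(\pi_{\mathrm{ref}})(\cdot \mid x)}
    \!\left[ y \succ y' \right]
```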

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0

Recent works have characterized (near-)optimal alignment objectives (IPO, BoN-distillation) for the standard win rate. See e.g. arxiv.org/pdf/2406.00832. However, when inference-time compute is used, outputs are drawn from a distribution that depends on the inference-time procedure.

11.02.2025 16:26 | 👍 0    🔁 0    💬 1    📌 0
Post image

RLHF generally entails training a reward model and then solving a KL-regularized reward maximization problem. Success is typically measured by the win rate of samples from the aligned model against the base model under standard sampling.
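
For reference, a sketch of the two objects mentioned here, in notation that is mine rather than the paper's (π_ref is the base policy, r the reward model, β the KL weight):

```latex
% Standard KL-regularized RLHF objective.
\max_{\pi}\;
  \mathbb{E}_{x \sim \mu}\Big[
    \mathbb{E}_{y \sim \pi(\cdot \mid x)}\!\left[ r(x, y) \right]
    - \beta\, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right)
  \Big]

% Standard win rate: one sample from each policy, compared head to head.
W(\pi \,\|\, \pi_{\mathrm{ref}})
  = \mathbb{E}_{x \sim \mu}\,
    \mathbb{P}_{y \sim \pi(\cdot \mid x),\; y' \sim \pi_{\mathrm{ref}}(\cdot \mid x)}\!\left[ y \succ y' \right]
```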

11.02.2025 16:26 | 👍 3    🔁 0    💬 1    📌 0
Post image

Inference-time procedures (e.g. Best-of-N, CoT) have been instrumental to the recent development of LLMs. Standard RLHF focuses only on improving the trained model, without accounting for how it will be used at inference time. This creates a train/inference mismatch.

๐˜Š๐˜ข๐˜ฏ ๐˜ธ๐˜ฆ ๐˜ข๐˜ญ๐˜ช๐˜จ๐˜ฏ ๐˜ฐ๐˜ถ๐˜ณ ๐˜ฎ๐˜ฐ๐˜ฅ๐˜ฆ๐˜ญ ๐˜ต๐˜ฐ ๐˜ฃ๐˜ฆ๐˜ต๐˜ต๐˜ฆ๐˜ณ ๐˜ด๐˜ถ๐˜ช๐˜ต ๐˜ข ๐˜จ๐˜ช๐˜ท๐˜ฆ๐˜ฏ ๐˜ช๐˜ฏ๐˜ง๐˜ฆ๐˜ณ๐˜ฆ๐˜ฏ๐˜ค๐˜ฆ-๐˜ต๐˜ช๐˜ฎ๐˜ฆ ๐˜ฑ๐˜ณ๐˜ฐ๐˜ค๐˜ฆ๐˜ฅ๐˜ถ๐˜ณ๐˜ฆ?

Check out below.

11.02.2025 16:26 | 👍 25    🔁 6    💬 1    📌 4
