arxiv:2501.12895

Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback

Published on Jan 22 · Submitted by yaful on Jan 23
#3 Paper of the day

Abstract

Large language models (LLMs) demonstrate impressive performance but lack the flexibility to adapt to human preferences quickly without retraining. In this work, we introduce Test-time Preference Optimization (TPO), a framework that aligns LLM outputs with human preferences during inference, removing the need to update model parameters. Rather than relying on purely numerical rewards, TPO translates reward signals into textual critiques and uses them as textual rewards to iteratively refine its responses. Evaluations on benchmarks covering instruction following, preference alignment, safety, and mathematics reveal that TPO progressively improves alignment with human preferences. Notably, after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can surpass the aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO scales efficiently with both the search width and depth during inference. Through case studies, we illustrate how TPO exploits the innate capacity of the LLM to interpret and act upon reward signals. Our findings establish TPO as a practical, lightweight alternative for test-time preference optimization, achieving alignment on the fly. Our code is publicly available at https://github.com/yafuly/TPO.

Community


We introduce Test-time Preference Optimization (TPO), a novel framework designed to align large language models (LLMs) with human preferences during inference without updating model parameters.

(Figure: TPO overview)

TPO operates by translating numerical reward signals into textual critiques and using these critiques as textual rewards to refine the model's responses iteratively, thereby enhancing alignment with human preferences during inference.
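The core loop can be sketched roughly as follows. This is a minimal illustration of the idea, not the released implementation: `llm.generate` and `reward_model.score` are hypothetical interfaces, and the prompt templates are placeholders.

```python
def tpo(prompt, llm, reward_model, depth=2, width=5):
    """Sketch of test-time preference optimization via iterative textual feedback."""
    # Sample an initial pool of candidate responses (sampling width).
    responses = [llm.generate(prompt) for _ in range(width)]

    for _ in range(depth):  # search depth: number of TPO iterations
        # Score candidates with the numerical reward model.
        scores = [reward_model.score(prompt, r) for r in responses]
        best = responses[scores.index(max(scores))]
        worst = responses[scores.index(min(scores))]

        # Translate the numerical rewards into a textual critique by asking
        # the LLM to contrast the high-reward and low-reward responses.
        critique = llm.generate(
            f"Query: {prompt}\n\nPreferred response:\n{best}\n\n"
            f"Rejected response:\n{worst}\n\n"
            "Explain why the preferred response is better and suggest "
            "concrete improvements."
        )

        # Use the critique as a textual reward to revise the candidate pool.
        responses = [
            llm.generate(
                f"Query: {prompt}\n\nDraft response:\n{best}\n\n"
                f"Critique:\n{critique}\n\nRevise the draft accordingly."
            )
            for _ in range(width)
        ]

    # Return the highest-scoring response after the final iteration.
    scores = [reward_model.score(prompt, r) for r in responses]
    return responses[scores.index(max(scores))]
```

Everything happens at inference time: neither the policy nor the reward model is updated; only the candidate responses evolve across iterations.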

(Figure: the TPO method)

Under TPO, an unaligned model (e.g., Llama-3.1-70B-SFT) progressively adapts to the preferences of the reward model at test time:

(Figure: unaligned model adaptation across TPO iterations)

The unaligned model (without training-time alignment such as DPO or RLHF) surpasses its strong aligned counterpart (e.g., Llama-3.1-70B-Instruct) within a few TPO iterations:

| Model | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard | HH-RLHF | BeaverTails | XSTest | MATH-500 |
|---|---|---|---|---|---|---|---|
| LLaMA-3.1-70B-DPO | 32.3 | 23.1 | 50.4 | -2.8 | -6.7 | 89.8 | 63.4 |
| LLaMA-3.1-70B-Instruct | 36.9 | 34.9 | 59.0 | -0.5 | -6.4 | 88.7 | 66.4 |
| LLaMA-3.1-70B-SFT | 27.8 | 16.8 | 44.1 | -4.1 | -7.2 | 87.8 | 61.8 |
| w/ TPO (D2-N5) † | 33.2 | 39.5 | 70.5 | 0.1 | -4.1 | 89.8 | 70.0 |
| w/ TPO (D2-N5) * | 33.0 | 40.5 | 69.7 | -0.6 | -4.8 | 90.4 | 71.2 |
| w/ TPO (D5-N20) * | 37.8 | 55.7 | 77.5 | 0.4 | -4.1 | 89.6 | 71.8 |
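Reading the D-N settings above as search depth (number of TPO iterations) and sampling width per iteration, which is our interpretation of the abstract's "search width and depth", they would map onto the sketch above roughly as:

```python
# Hypothetical usage of the tpo() sketch above; D is read as depth
# (iterations) and N as width (samples per iteration).
answer_d2n5 = tpo(prompt, llm, reward_model, depth=2, width=5)    # TPO (D2-N5)
answer_d5n20 = tpo(prompt, llm, reward_model, depth=5, width=20)  # TPO (D5-N20)
```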

Moreover, already-aligned models achieve further improvements through TPO:

| Model | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard | HH-RLHF | BeaverTails | XSTest | MATH-500 |
|---|---|---|---|---|---|---|---|
| Llama-3.1-70B-Instruct | 36.9 | 34.9 | 59.0 | -0.5 | -6.4 | 88.7 | 66.4 |
| w/ TPO (D2-N5) * | 39.1 | 48.5 | 69.5 | 1.3 | -3.6 | 89.6 | 71.6 |
| Mistral-Small-Instruct-2409 | 45.7 | 38.5 | 53.8 | -0.4 | -5.2 | 87.1 | 57.6 |
| w/ TPO (D2-N5) * | 53.4 | 60.5 | 72.2 | 1.1 | -3.4 | 90.7 | 62.2 |

For more details about TPO, please refer to our paper and GitHub page.
