arxiv:2310.12773

Safe RLHF: Safe Reinforcement Learning from Human Feedback

Published on Oct 19, 2023
· Submitted by akhaliq on Oct 20, 2023
#3 Paper of the day

Abstract

With the development of large language models (LLMs), striking a balance between the performance and safety of AI systems has never been more critical. However, the inherent tension between the objectives of helpfulness and harmlessness presents a significant challenge during LLM training. To address this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly decouples human preferences regarding helpfulness and harmlessness, effectively avoiding the crowdworkers' confusion about the tension and allowing us to train separate reward and cost models. We formalize the safety concern of LLMs as an optimization task of maximizing the reward function while satisfying specified cost constraints. Leveraging the Lagrangian method to solve this constrained problem, Safe RLHF dynamically adjusts the balance between the two objectives during fine-tuning. Through three rounds of fine-tuning with Safe RLHF, we demonstrate a superior ability to mitigate harmful responses while enhancing model performance compared to existing value-aligned algorithms. Experimentally, we fine-tuned Alpaca-7B using Safe RLHF and aligned it with collected human preferences, significantly improving its helpfulness and harmlessness according to human evaluations.
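
For readers skimming the abstract, the constrained formulation boils down to: maximize expected reward (helpfulness) subject to expected cost (harmfulness) staying under a budget, with the Lagrangian method turning that constraint into a dynamically weighted penalty. The PyTorch-flavored sketch below is only a rough illustration of that idea; the names (safe_rlhf_update, reward_model, cost_model, cost_limit, policy_loss_fn) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the constrained objective described in the abstract:
# maximize expected reward while keeping expected cost below a budget,
# using a Lagrange multiplier updated by dual ascent. All names here are
# illustrative assumptions, not the paper's actual code.
import torch


def safe_rlhf_update(policy_loss_fn, reward_model, cost_model,
                     prompts, responses, log_lambda,
                     cost_limit=0.0, lambda_lr=0.05):
    """One illustrative optimization step balancing helpfulness and harmlessness."""
    rewards = reward_model(prompts, responses)  # helpfulness signal
    costs = cost_model(prompts, responses)      # harmfulness signal

    lam = log_lambda.exp().detach()             # exp keeps the multiplier non-negative

    # Policy objective: reward penalized by the lambda-weighted cost,
    # rescaled so the signal's magnitude stays stable as lambda grows.
    shaped = (rewards - lam * costs) / (1.0 + lam)
    policy_loss = policy_loss_fn(prompts, responses, shaped)

    # Dual ascent on lambda: raise it when the mean cost exceeds the budget
    # (constraint violated), lower it when the policy is within budget.
    with torch.no_grad():
        violation = costs.mean() - cost_limit
        log_lambda += lambda_lr * violation

    return policy_loss, lam.item()
```

The essential behavior is that the multiplier grows while the cost constraint is violated, shifting weight toward harmlessness, and shrinks back once the constraint is satisfied, which is the dynamic adjustment the abstract refers to.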

Community

This is a step in the right direction. The problem with RLHF is the H component. Many different ideologies are competing and are averaged out. Look at the left/right "culture war" and you can see how ineffective this is. Current world affairs showcase just how deep the divide is.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

Hi @mattbarr, librarian-bot is managed by HF staff and aims to recommend additional papers to our users. You can find more information about it on the following page: https://huggingface.co./librarian-bots
Let me know if you need more explanation.
Best
Rom

Enhancing AI Safety with Safe RLHF: Balancing Helpfulness and Harmlessness

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper: 12

Datasets citing this paper: 0

Spaces citing this paper: 2

Collections including this paper: 10