Safe RLHF: Safe Reinforcement Learning from Human Feedback • arXiv:2310.12773 • Published Oct 19, 2023
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset • arXiv:2307.04657 • Published Jul 10, 2023