---
license: other
tags:
- moe
- DPO
- RL-TUNED
---
* [DPO Trainer](https://huggingface.co./docs/trl/main/en/dpo_trainer) fine-tuned on the Intel/orca_dpo_pairs dataset to improve [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co./yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B)
> **DPO Trainer**
>
> TRL supports the DPO Trainer for training language models from preference data, as described in the paper [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://arxiv.org/abs/2305.18290) by Rafailov et al., 2023.