# Trainer
In TRL we support PPO (Proximal Policy Optimization) with an implementation that largely follows the structure introduced in the paper "Fine-Tuning Language Models from Human Preferences" by D. Ziegler et al. [[paper](https://huggingface.co./papers/1909.08593), [code](https://github.com/openai/lm-human-preferences)].
The trainer and model classes are largely inspired by the `transformers.Trainer` and `transformers.AutoModel` classes and adapted for RL.
We also support a `RewardTrainer` that can be used to train a reward model.
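
To give a sense of how these pieces fit together, here is a minimal PPO sketch: encode a query, generate a response, score it, and run one optimization step. It is only an illustration, assuming `gpt2` as a placeholder checkpoint and a constant dummy reward in place of a real reward model.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# Placeholder checkpoint; any causal LM works the same way.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# Encode a query and generate a response (the prompt is stripped from the output).
query_tensor = tokenizer.encode("This morning I went to the", return_tensors="pt")[0]
response_tensor = ppo_trainer.generate(
    query_tensor, return_prompt=False, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id
)[0]

# Dummy constant reward; in practice this comes from a reward model or another scoring function.
reward = [torch.tensor(1.0)]

# One PPO optimization step on a single (query, response, reward) triple.
stats = ppo_trainer.step([query_tensor], [response_tensor], reward)
```
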
## CPOConfig
[[autodoc]] CPOConfig
## CPOTrainer
[[autodoc]] CPOTrainer
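
A minimal usage sketch: like ORPO, CPO does not require a reference model. The model name, output directory, and dataset below are placeholders; the dataset is assumed to contain `prompt`, `chosen`, and `rejected` columns.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("my-org/preference-dataset", split="train")

args = CPOConfig(output_dir="cpo-model", per_device_train_batch_size=2)
trainer = CPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```
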
## DDPOConfig
[[autodoc]] DDPOConfig
## DDPOTrainer
[[autodoc]] DDPOTrainer
## DPOTrainer
[[autodoc]] DPOTrainer
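
A minimal sketch, assuming a causal LM checkpoint and a hypothetical preference dataset with `prompt`, `chosen`, and `rejected` columns. Since this page does not list a `DPOConfig`, the example passes a plain `transformers.TrainingArguments` and sets `beta` on the trainer directly.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
ref_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("my-org/preference-dataset", split="train")

training_args = TrainingArguments(
    output_dir="dpo-model",
    per_device_train_batch_size=2,
    remove_unused_columns=False,
)
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,                      # strength of the implicit KL constraint
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
dpo_trainer.train()
```
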
## IterativeSFTTrainer
[[autodoc]] IterativeSFTTrainer
## KTOConfig
[[autodoc]] KTOConfig
## KTOTrainer
[[autodoc]] KTOTrainer
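
A minimal sketch, assuming a hypothetical unpaired dataset with `prompt`, `completion`, and boolean `label` columns; model names and hyperparameters are placeholders.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
ref_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical unpaired dataset with "prompt", "completion" and boolean "label" columns.
train_dataset = load_dataset("my-org/kto-dataset", split="train")

args = KTOConfig(output_dir="kto-model", per_device_train_batch_size=4)
trainer = KTOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```
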
## ORPOConfig
[[autodoc]] ORPOConfig
## ORPOTrainer
[[autodoc]] ORPOTrainer
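
A minimal sketch: ORPO is reference-free, so no `ref_model` is passed. The dataset name is hypothetical and is assumed to contain `prompt`, `chosen`, and `rejected` columns.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("my-org/preference-dataset", split="train")

args = ORPOConfig(output_dir="orpo-model", per_device_train_batch_size=2)
trainer = ORPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```
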
## PPOConfig
[[autodoc]] PPOConfig
## PPOTrainer
[[autodoc]] PPOTrainer
## RewardConfig
[[autodoc]] RewardConfig
## RewardTrainer
[[autodoc]] RewardTrainer
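
A minimal sketch, assuming a hypothetical pairwise dataset with raw `chosen` and `rejected` text columns. The `RewardTrainer` works on the tokenized columns `input_ids_chosen`, `attention_mask_chosen`, `input_ids_rejected`, and `attention_mask_rejected`, so the example tokenizes the pairs first; the model name and hyperparameters are placeholders.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

# A sequence-classification head with a single logit acts as the reward model.
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

# Hypothetical pairwise dataset with raw "chosen" and "rejected" text columns.
train_dataset = load_dataset("my-org/reward-dataset", split="train")

def tokenize(example):
    chosen = tokenizer(example["chosen"], truncation=True)
    rejected = tokenizer(example["rejected"], truncation=True)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

train_dataset = train_dataset.map(tokenize)

args = RewardConfig(
    output_dir="reward-model",
    per_device_train_batch_size=2,
    max_length=512,
    remove_unused_columns=False,
)
trainer = RewardTrainer(
    model=model,
    args=args,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```
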
## SFTTrainer
[[autodoc]] SFTTrainer
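
A minimal sketch of supervised fine-tuning on a plain text dataset; the model and dataset (`facebook/opt-350m`, `imdb`) are just placeholders, and `dataset_text_field` names the column that holds the raw text.

```python
from datasets import load_dataset
from trl import SFTTrainer

# Placeholder dataset; "text" is the column containing the training text.
dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "facebook/opt-350m",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```
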
## set_seed
[[autodoc]] set_seed
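
Call `set_seed` before building trainers or generating samples to make runs reproducible; the seed value below is arbitrary.

```python
from trl import set_seed

# Seed the Python, NumPy and PyTorch random number generators.
set_seed(42)
```
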