# About ORPO

A collection of notes and experiments on fine-tuning LLMs with 🤗 `trl.ORPOTrainer`.
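As a minimal sketch of the workflow this collection covers (the dataset, hyperparameter values, and output path below are illustrative assumptions, not taken from any card in the collection), an `ORPOTrainer` run looks roughly like this:

```python
# Minimal ORPO fine-tuning sketch with trl; all values are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

# ORPO expects preference data with "prompt", "chosen", and "rejected" columns;
# this dataset is an assumption, not the (unspecified) one used in the card below.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = ORPOConfig(
    output_dir="mistral-7b-orpo",  # hypothetical output path
    beta=0.1,                      # weight of the odds-ratio term (lambda in the ORPO paper)
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    logging_steps=10,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```

`ORPOConfig` subclasses `transformers.TrainingArguments`, so the usual knobs (learning rate, scheduler, batch size) are set there alongside the ORPO-specific `beta`, `max_length`, and `max_prompt_length`.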
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unspecified dataset (the auto-generated card records it as `None`). It achieves the evaluation results shown in the final row of the training table below.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | NLL Loss | Log Odds Ratio | Log Odds Chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.9159 | 1.0 | 105 | 0.8794 | -0.0421 | -0.0499 | 0.6302 | 0.0078 | -0.9975 | -0.8413 | -2.8931 | -2.8875 | 0.8561 | -0.6429 | 0.3024 |
| 0.8397 | 2.0 | 211 | 0.8612 | -0.0404 | -0.0495 | 0.6458 | 0.0092 | -0.9902 | -0.8071 | -2.8882 | -2.8794 | 0.8366 | -0.6257 | 0.3555 |
| 0.7808 | 2.99 | 315 | 0.8648 | -0.0405 | -0.0502 | 0.6458 | 0.0097 | -1.0036 | -0.8096 | -2.9146 | -2.9040 | 0.8392 | -0.6215 | 0.3802 |
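For context on the ORPO-specific columns (a summary of the objective from the ORPO paper, Hong et al. 2024, not part of the original card): `NLL Loss` is the supervised negative log-likelihood term on the chosen responses, `Log Odds Chosen` appears to track the log odds ratio favoring the chosen over the rejected response, and `Log Odds Ratio` its log-sigmoid, which enters the loss as

$$
\mathcal{L}_{\text{ORPO}} = \mathcal{L}_{\text{NLL}} - \lambda \, \log \sigma\!\left(\log \frac{\operatorname{odds}_\theta(y_w \mid x)}{\operatorname{odds}_\theta(y_l \mid x)}\right),
\qquad
\operatorname{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)},
$$

where $y_w$ and $y_l$ are the chosen and rejected responses and $\lambda$ is the `beta` hyperparameter in `ORPOConfig`. Under this reading, the table shows the preference margin widening over training (Log Odds Chosen rising from 0.30 to 0.38) while the NLL term stays near 0.84.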