# Mistral-7B-Instruct-v0.3-dpo-lora_lr1e-5_5ep
This model is a LoRA adapter for mistralai/Mistral-7B-Instruct-v0.3, fine-tuned with DPO (Direct Preference Optimization) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2423
- Rewards/chosen: -0.5640
- Rewards/rejected: -4.3641
- Rewards/accuracies: 0.8557
- Rewards/margins: 3.8002
- Logps/rejected: -417.8773
- Logps/chosen: -434.6749
- Logits/rejected: 0.0188
- Logits/chosen: 0.1716
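For reference, these Rewards/* metrics follow the usual DPO convention (as logged by `trl`'s `DPOTrainer`, assuming that is the training stack used here): each response is scored by the implicit reward, i.e. the β-scaled log-probability ratio between the fine-tuned policy and the frozen reference model, and the loss is the negative log-sigmoid of the chosen-minus-rejected margin:

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\big(r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})\big)
$$

Under this reading, Rewards/margins is the mean chosen-minus-rejected reward difference, and Rewards/accuracies is the fraction of evaluation pairs with a positive margin.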
## Model description

More information needed

## Intended uses & limitations

More information needed
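In the absence of detailed guidance, here is a minimal inference sketch (not an official snippet from the author): load the base instruct model and attach this LoRA adapter with `peft`. It assumes `transformers`, `peft`, `torch`, and `accelerate` are installed, and that the adapter is published as `HachiML/Mistral-7B-Instruct-v0.3-dpo-lora_lr1e-5_5ep`.

```python
# Minimal inference sketch: base model + this DPO LoRA adapter.
# Assumes transformers, peft, torch, and accelerate are installed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "HachiML/Mistral-7B-Instruct-v0.3-dpo-lora_lr1e-5_5ep"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

messages = [{"role": "user", "content": "Explain what DPO fine-tuning does in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```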
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
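The actual training script is not included on this card. Below is a rough reproduction sketch using `trl`'s `DPOTrainer` with the hyperparameters listed above; it assumes a `trl` release contemporary with the framework versions below (e.g. 0.8.x, where `beta` and `tokenizer` are passed directly to the trainer). The preference dataset, LoRA rank/target modules, and DPO beta are not reported here, so those values are placeholders.

```python
# Rough reproduction sketch, not the author's actual script.
# Placeholders: dataset name, LoRA settings, DPO beta (none are stated on this card).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Placeholder: a preference dataset with "prompt" / "chosen" / "rejected" columns.
dataset = load_dataset("your/preference-dataset")

peft_config = LoraConfig(  # placeholder LoRA settings; the card does not list them
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

training_args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.3-dpo-lora_lr1e-5_5ep",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    seed=42,
    bf16=True,
    evaluation_strategy="epoch",
    logging_steps=10,
)

trainer = DPOTrainer(
    model,
    ref_model=None,            # with a peft_config, the frozen base model acts as the reference
    args=training_args,
    beta=0.1,                  # placeholder: DPO beta is not reported on this card
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],  # placeholder split name
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```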
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------|:-----|:----|:---------------|:--------------|:----------------|:------------------|:---------------|:--------------|:-------------|:---------------|:--------------|
| 0.4274 | 1.0 | 103 | 0.3324 | 0.2097 | -1.9397 | 0.7932 | 2.1493 | -393.6328 | -426.9386 | -0.3230 | -0.0880 |
| 0.1309 | 2.0 | 206 | 0.2679 | 0.0296 | -2.9700 | 0.8482 | 2.9997 | -403.9364 | -428.7391 | -0.1288 | 0.0588 |
| 0.0376 | 3.0 | 309 | 0.2491 | -0.4817 | -4.1509 | 0.8452 | 3.6692 | -415.7445 | -433.8520 | -0.0034 | 0.1539 |
| 0.0158 | 4.0 | 412 | 0.2450 | -0.5678 | -4.3787 | 0.8557 | 3.8110 | -418.0231 | -434.7127 | 0.0183 | 0.1715 |
| 0.0129 | 5.0 | 515 | 0.2423 | -0.5640 | -4.3641 | 0.8557 | 3.8002 | -417.8773 | -434.6749 | 0.0188 | 0.1716 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1