---
library_name: transformers
license: apache-2.0
base_model: tsavage68/IE_M2_1000steps_1e7rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: IE_M2_350steps_1e8rate_03beta_cSFTDPO
  results: []
---

# IE_M2_350steps_1e8rate_03beta_cSFTDPO

This model is a fine-tuned version of [tsavage68/IE_M2_1000steps_1e7rate_SFT](https://huggingface.co/tsavage68/IE_M2_1000steps_1e7rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set (see the note on interpreting these metrics at the end of this card):
- Loss: 0.6746
- Rewards/chosen: -0.0013
- Rewards/rejected: -0.0404
- Rewards/accuracies: 0.3600
- Rewards/margins: 0.0391
- Logps/rejected: -41.1564
- Logps/chosen: -42.2098
- Logits/rejected: -2.9159
- Logits/chosen: -2.8545

## Model description

This checkpoint continues the SFT base model with direct preference optimization (DPO), as indicated by the `trl` and `dpo` tags. Judging by the model name, training used beta = 0.3 for 350 steps at a 1e-8 learning rate; the preference dataset itself is not documented.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch appears at the end of this card):
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 350

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6998 | 0.4 | 50 | 0.6949 | 0.0058 | 0.0085 | 0.2050 | -0.0028 | -40.9934 | -42.1863 | -2.9160 | -2.8547 |
| 0.6925 | 0.8 | 100 | 0.6906 | 0.0017 | -0.0041 | 0.2600 | 0.0059 | -41.0355 | -42.1997 | -2.9159 | -2.8546 |
| 0.679 | 1.2 | 150 | 0.6779 | 0.0047 | -0.0273 | 0.3650 | 0.0320 | -41.1127 | -42.1899 | -2.9158 | -2.8546 |
| 0.6715 | 1.6 | 200 | 0.6747 | 0.0020 | -0.0367 | 0.3900 | 0.0387 | -41.1442 | -42.1988 | -2.9156 | -2.8544 |
| 0.6764 | 2.0 | 250 | 0.6736 | -0.0012 | -0.0419 | 0.3850 | 0.0407 | -41.1614 | -42.2094 | -2.9156 | -2.8543 |
| 0.6842 | 2.4 | 300 | 0.6763 | -0.0024 | -0.0380 | 0.3500 | 0.0355 | -41.1483 | -42.2137 | -2.9159 | -2.8545 |
| 0.6712 | 2.8 | 350 | 0.6746 | -0.0013 | -0.0404 | 0.3600 | 0.0391 | -41.1564 | -42.2098 | -2.9159 | -2.8545 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.0.0+cu117
- Datasets 3.0.0
- Tokenizers 0.19.1
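
## How to use

The card ships without a usage snippet. Below is a minimal inference sketch, assuming the checkpoint loads as a standard causal language model via `transformers`; the prompt is a hypothetical placeholder, and the half-precision and `device_map` settings are assumptions about your hardware, not requirements of the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/IE_M2_350steps_1e8rate_03beta_cSFTDPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits your device
    device_map="auto",          # requires the `accelerate` package
)

# Hypothetical prompt; the intended prompt format is not documented.
prompt = "Extract the entities from the following note:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```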
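
## Reproducing the DPO setup

The following sketch maps the reported hyperparameters onto `trl`'s `DPOTrainer`. It is not the author's script: the preference dataset is undocumented (the dataset name below is a placeholder), beta = 0.3 is inferred from the model name ("03beta"), and the trainer's keyword arguments vary across `trl` releases (recent versions take `processing_class` instead of `tokenizer`).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/IE_M2_1000steps_1e7rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder: a preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("your/preference-dataset", split="train")

args = DPOConfig(
    output_dir="IE_M2_350steps_1e8rate_03beta_cSFTDPO",
    beta=0.3,                        # assumption, inferred from the model name
    learning_rate=1e-8,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # effective batch size 4 on one device
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=350,
    seed=42,
)

trainer = DPOTrainer(
    model=model,          # ref_model omitted: trl clones a frozen reference
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # `processing_class=tokenizer` on recent trl
)
trainer.train()
```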
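
## Interpreting the DPO metrics

In `trl`'s DPO implementation, each reported reward is the beta-scaled difference between the policy's and the frozen reference model's log-probability of a response, averaged over the evaluation set; Rewards/accuracies is the fraction of pairs where the chosen reward exceeds the rejected one, and Rewards/margins is simply chosen minus rejected. A quick check against the final evaluation row above:

```python
# Rewards/margins = Rewards/chosen - Rewards/rejected (final eval row).
rewards_chosen, rewards_rejected = -0.0013, -0.0404
margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 0.0391, matching the reported Rewards/margins
```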