---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistral-dpo
  results: []
pipeline_tag: conversational
---

# mistral-dpo

This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co./TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified preference dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0559
- Rewards/chosen: -0.6622
- Rewards/rejected: -5.8356
- Rewards/accuracies: 1.0
- Rewards/margins: 5.1735
- Logps/rejected: -138.0126
- Logps/chosen: -105.3292
- Logits/rejected: -2.5356
- Logits/chosen: -2.7185

## Model description

This model was trained with Direct Preference Optimization (DPO) via the TRL library, starting from the GPTQ-quantized OpenHermes-2 Mistral 7B base model.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6666        | 0.01  | 10   | 0.5490          | 0.3763         | 0.0083           | 1.0                | 0.3680          | -79.5733       | -94.9446     | -2.6386         | -2.7333       |
| 0.439         | 0.01  | 20   | 0.2792          | 1.0686         | -0.2159          | 1.0                | 1.2845          | -81.8148       | -88.0209     | -2.6245         | -2.7868       |
| 0.1683        | 0.02  | 30   | 0.1116          | 1.0530         | -2.2150          | 1.0                | 3.2680          | -101.8059      | -88.1772     | -2.6157         | -2.7924       |
| 0.54          | 0.03  | 40   | 0.0719          | -0.1064        | -4.6952          | 1.0                | 4.5888          | -126.6084      | -99.7713     | -2.5649         | -2.7384       |
| 0.0965        | 0.03  | 50   | 0.0559          | -0.6622        | -5.8356          | 1.0                | 5.1735          | -138.0126      | -105.3292    | -2.5356         | -2.7185       |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
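
## Example usage

The card does not document an inference recipe, so the following is a minimal sketch. It assumes this repo holds a PEFT (LoRA) adapter trained on top of the GPTQ base model; the adapter id `your-username/mistral-dpo` is a placeholder, and `auto-gptq`, `optimum`, and `peft` must be installed. OpenHermes-2 Mistral expects the ChatML prompt format.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
adapter_id = "your-username/mistral-dpo"  # placeholder: point this at the actual adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the DPO-trained adapter
model.eval()

# OpenHermes-2 Mistral was trained on the ChatML prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain DPO in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```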
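
## Reproducing the training setup

The hyperparameters above map directly onto `TrainingArguments` and TRL's `DPOTrainer` (using the TRL API contemporary with the pinned Transformers 4.35.2, roughly TRL 0.7.x). The sketch below is illustrative only: the preference dataset, LoRA configuration, DPO `beta`, and sequence-length limits are not documented in this card and are stand-in assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model.config.use_cache = False             # incompatible with gradient checkpointing
model = prepare_model_for_kbit_training(model)

# Placeholder: any dataset with "prompt", "chosen", and "rejected" columns works.
dataset = load_dataset("your-username/preference-dataset", split="train")

# LoRA settings are assumptions; the card does not record the adapter config.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# These mirror the "Training hyperparameters" section above.
training_args = TrainingArguments(
    output_dir="mistral-dpo",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
    logging_steps=10,
)

trainer = DPOTrainer(
    model,
    ref_model=None,         # with a PEFT adapter, TRL builds the reference model internally
    args=training_args,
    beta=0.1,               # assumed TRL default; the actual value is not documented
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,        # assumed; not documented
    max_prompt_length=512,  # assumed; not documented
)
trainer.train()
```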