---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama0-3-8b-v0.1-p-2-lr5e-6-e1
  results: []
---

# Llama0-3-8b-v0.1-p-2-lr5e-6-e1

This model is a DPO fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0071
- Rewards/chosen: -325.0106
- Rewards/rejected: -326.4964
- Rewards/accuracies: 0.4839
- Rewards/margins: 1.4858
- Logps/rejected: -32736.4023
- Logps/chosen: -32589.3574
- Logits/rejected: 15.2095
- Logits/chosen: 15.1707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0157        | 0.2137 | 100  | 0.0113          | -210.0611      | -211.0992        | 0.4597             | 1.0381          | -21196.6719    | -21094.4004  | 13.2797         | 13.2492       |
| 0.0103        | 0.4275 | 200  | 0.0100          | -295.4855      | -293.2994        | 0.5081             | -2.1861         | -29416.6973    | -29636.8457  | 14.5637         | 14.5474       |
| 0.0074        | 0.6412 | 300  | 0.0081          | -286.9739      | -284.4362        | 0.4718             | -2.5377         | -28530.3730    | -28785.6816  | 14.5532         | 14.6383       |
| 0.0061        | 0.8549 | 400  | 0.0073          | -253.9528      | -254.9697        | 0.5040             | 1.0169          | -25583.7266    | -25483.5762  | 13.4697         | 13.4536       |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.20.0
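
### Reproducing the training setup (sketch)

The hyperparameters above map directly onto TRL's `DPOTrainer`. The following is a minimal sketch of that mapping, not the exact script used for this run: the preference dataset is undocumented, so `"your/preference-dataset"` is a placeholder (a dataset with `prompt`/`chosen`/`rejected` columns), and argument names may differ slightly across TRL versions.

```python
# Hypothetical reconstruction of the training configuration listed above.
# The dataset name is a placeholder; the actual training data is not documented.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the card does not name the preference dataset.
train_dataset = load_dataset("your/preference-dataset", split="train")

args = DPOConfig(
    output_dir="Llama0-3-8b-v0.1-p-2-lr5e-6-e1",
    learning_rate=5e-6,
    per_device_train_batch_size=2,   # x 8 GPUs x 8 accumulation steps = 128 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the AdamW defaults.
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,            # TRL clones the policy as the frozen reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

The multi-GPU layout (8 devices) would be handled by the launcher, e.g. `accelerate launch train_dpo.py`, rather than by the script itself.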
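
### Example usage (sketch)

Since the base model is an instruct checkpoint, the fine-tuned model can be served through the standard `transformers` text-generation pipeline with chat-formatted input. The repo id below is assumed from the model name; substitute the actual Hub path.

```python
# Minimal inference sketch; "Llama0-3-8b-v0.1-p-2-lr5e-6-e1" is an assumed repo id.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Llama0-3-8b-v0.1-p-2-lr5e-6-e1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize direct preference optimization in one sentence."}]
out = pipe(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last turn is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```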