---
base_model: unsloth/zephyr-sft-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
---

# Uploaded model

- **Developed by:** sonthenguyen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/zephyr-sft-bnb-4bit

This Mistral model was DPO fine-tuned 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## Training run

- **Num GPUs:** 1
- **Num examples:** 8,046
- **Num epochs:** 1 (stopped at 252 steps, ~0.50 epoch)
- **Batch size per device:** 4
- **Gradient accumulation steps:** 4
- **Total batch size:** 16
- **Total steps:** 252
- **Trainable parameters:** 41,943,040
- **Training loss:** 0.0682
- **Train runtime:** 2,060.2 s (1.953 samples/s, 0.122 steps/s)
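
## Example: reproducing the DPO fine-tune (sketch)

The card does not include the training script, so the following is a minimal sketch of how a DPO run with the reported hyperparameters could look using Unsloth and TRL. The dataset name, sequence length, learning rate, and LoRA alpha are assumptions (marked in the comments); only the base model, batch size, gradient accumulation, step count, and the r=16 LoRA configuration implied by the 41,943,040 trainable parameters come from the numbers above. The `DPOTrainer` call follows the older TRL API in which `beta` and `tokenizer` are passed directly; newer TRL releases move these into `DPOConfig`.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import DPOTrainer
from unsloth import FastLanguageModel

# Load the 4-bit SFT base model listed under `base_model` above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/zephyr-sft-bnb-4bit",
    max_seq_length=2048,   # assumption: sequence length is not stated in the card
    load_in_4bit=True,
)

# Attach LoRA adapters. With r=16 on the standard attention/MLP projections,
# a Mistral-7B model has 41,943,040 trainable parameters, matching the count
# reported above (the rest of the LoRA config is an assumption).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical placeholder: the card does not name the 8,046-example
# preference dataset. DPO expects "prompt", "chosen", "rejected" columns.
preference_dataset = load_dataset("your-username/your-dpo-dataset", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # with a PEFT model, TRL reuses the frozen base as the reference
    beta=0.1,         # assumption: standard DPO beta, not stated in the card
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(
        per_device_train_batch_size=4,   # "Batch size per device = 4"
        gradient_accumulation_steps=4,   # "Gradient accumulation steps = 4"
        max_steps=252,                   # "Total steps = 252" (~0.5 epoch over 8,046 examples)
        learning_rate=5e-6,              # assumption: not stated in the card
        logging_steps=10,
        output_dir="outputs",
    ),
)

trainer.train()
```

With a total batch size of 16 (4 per device x 4 accumulation steps), 252 steps cover roughly half of the 8,046 examples, which matches the reported final epoch of ~0.50.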