---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
- jondurbin/truthy-dpo-v0.1
- kyujinpy/orca_math_dpo
- antiven0m/physical-reasoning-dpo
---
![image/png](https://huggingface.co./nbeerbower/bophades-mistral-7B/resolve/main/bophades.png)

# mistral-nemo-bophades3-12B

[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co./nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [jondurbin/truthy-dpo-v0.1](https://huggingface.co./datasets/jondurbin/truthy-dpo-v0.1), [kyujinpy/orca_math_dpo](https://huggingface.co./datasets/kyujinpy/orca_math_dpo), and [antiven0m/physical-reasoning-dpo](https://huggingface.co./datasets/antiven0m/physical-reasoning-dpo).

### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 for 2 epochs.
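
The linked post walks through ORPO fine-tuning in detail. Below is a minimal TRL-based sketch of that setup, assuming the three preference datasets are mapped into `prompt`/`chosen`/`rejected` columns; the hyperparameters, batch sizes, and the `to_orpo_format` helper are illustrative assumptions, not the actual training script used for this model.

```python
# Illustrative ORPO sketch only: hyperparameters, batch sizes, and the column
# mapping below are assumptions, not the exact recipe used for this model.
import torch
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

def to_orpo_format(example):
    # ORPOTrainer expects "prompt", "chosen", and "rejected" columns.
    # Column names vary per dataset; adjust this mapping accordingly (assumption).
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

raw_sets = [
    load_dataset("jondurbin/truthy-dpo-v0.1", split="train"),
    load_dataset("kyujinpy/orca_math_dpo", split="train"),
    load_dataset("antiven0m/physical-reasoning-dpo", split="train"),
]
train_dataset = concatenate_datasets(
    [d.map(to_orpo_format, remove_columns=d.column_names) for d in raw_sets]
)

config = ORPOConfig(
    output_dir="mistral-nemo-bophades3-12B",
    num_train_epochs=2,                 # 2 epochs, as stated above
    per_device_train_batch_size=1,      # assumed; scaled across 8x A100
    gradient_accumulation_steps=8,      # assumed
    learning_rate=5e-6,                 # assumed
    beta=0.1,                           # ORPO loss weighting; assumed
    max_length=2048,                    # assumed
    max_prompt_length=1024,             # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,         # use tokenizer= on older TRL releases
)
trainer.train()
```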