---
library_name: transformers
base_model:
- axolotl-ai-co/romulus-mistral-nemo-12b-simpo
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
license: apache-2.0
---
![image/png](https://huggingface.co./nbeerbower/Mistral-Small-Gutenberg-Doppel-22B/resolve/main/doppel-header?download=true)

# Mistral-Nemo-Gutenberg-Doppel-12B

[axolotl-ai-co/romulus-mistral-nemo-12b-simpo](https://huggingface.co./axolotl-ai-co/romulus-mistral-nemo-12b-simpo) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co./datasets/jondurbin/gutenberg-dpo-v0.1) and [nbeerbower/gutenberg2-dpo](https://huggingface.co./datasets/nbeerbower/gutenberg2-dpo).
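A minimal inference sketch with 🤗 Transformers, assuming the standard chat-model loading path and this repo's id (`nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B`); the prompt and sampling settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Format the conversation with the model's chat template
messages = [{"role": "user", "content": "Write the opening paragraph of a gothic novel."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```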

### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) on 2x A100 GPUs for 3 epochs.
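For reference, a hedged sketch of what such an ORPO run could look like with TRL's `ORPOTrainer`. The base model, datasets, and epoch count come from this card; everything else (batch sizes, `beta`, the assumption that both datasets share `prompt`/`chosen`/`rejected` columns) is illustrative, and the `tokenizer=` keyword matches the TRL versions of that era (newer releases use `processing_class=`).

```python
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "axolotl-ai-co/romulus-mistral-nemo-12b-simpo"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")

# Both DPO-style datasets are assumed to share prompt/chosen/rejected columns
dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train"),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),
])

config = ORPOConfig(
    output_dir="gutenberg-doppel-orpo",
    num_train_epochs=3,               # from this card
    beta=0.1,                         # odds-ratio loss weight; assumed hyperparameter
    per_device_train_batch_size=1,    # assumed; tune to fit GPU memory
    gradient_accumulation_steps=8,    # assumed
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```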