BramVanroy
committed on
Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ language:
This model is a fine-tuned version of [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B) on a number of synthetic datasets including gpt-3.5-turbo and gpt-4-turbo data, multi- and single turn conversations, and code. The training set consists of around 240M tokens. The model was trained with context length 8192.

-**Note that this model has not been aligned with DPO or other techniques. In practice, <u>it is therefore recommended</u> to use the [DPO variant](https://huggingface.co/BramVanroy/GEITje-ultra) of this model.**
+**Note that this model has not been aligned with DPO or other techniques. In practice, <u>it is therefore recommended</u> to use the [DPO variant](https://huggingface.co/BramVanroy/GEITje-7B-ultra) of this model.**

## Model description
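
Since the model card recommends the DPO variant, a minimal sketch of loading it with the 🤗 Transformers text-generation pipeline is shown below. This assumes a recent `transformers` release that accepts chat-style message lists in the pipeline; the generation settings and the Dutch prompt are illustrative choices, not values from the model card.

```python
# Minimal sketch (assumptions noted above): load the recommended DPO variant
# and run a single chat-style generation.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="BramVanroy/GEITje-7B-ultra",
    device_map="auto",  # requires `accelerate`; places the model on available devices
)

# Chat-formatted input; the pipeline applies the model's chat template.
messages = [
    {"role": "user", "content": "Leg in het kort uit wat een taalmodel is."},
]

result = pipe(messages, max_new_tokens=256)  # max_new_tokens is an illustrative value
print(result[0]["generated_text"])
```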