BramVanroy committed: Update README.md

README.md (changed)
This model is a fine-tuned version of [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B) on a number of synthetic datasets, including gpt-3.5-turbo and gpt-4-turbo data, multi- and single-turn conversations, and code. The training set consists of around 240M tokens. The model was trained with a context length of 8192.
Note that this model has not been aligned with DPO or other techniques. In practice, it is therefore recommended to use the [DPO variant](https://huggingface.co/BramVanroy/GEITje-ultra-dpo) of this model.
## Model description
This model is an SFT (chat-tuned) version of [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B), which in turn is based on Mistral 7B and further pretrained on Dutch data.