Update README.md
README.md
CHANGED
```diff
@@ -20,9 +20,11 @@ NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingfa
 
 Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪
 
-## 🔎
+## 🔎 Applications
 
-This model uses a context window of 8k.
+This model uses a context window of 8k. It is compatible with different templates, like chatml and Llama's chat template.
+
+Compared to other 7B models, it displays good performance in instruction following and reasoning tasks. It can also be used for RP and storytelling.
 
 ## 🏆 Evaluation
 
```
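The added lines mention compatibility with ChatML-style prompts. A minimal sketch of that layout, assuming the standard `<|im_start|>`/`<|im_end|>` markers; the helper name is illustrative, not from the model card:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} turns in the ChatML layout."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here
        parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from `transformers` builds this string from the template bundled with the model, so hand-rolling it is only needed when working outside that library.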