Update README.md
README.md
CHANGED
@@ -20,6 +20,8 @@ NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingfa

Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪

+ You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralBeagle14-7B-GGUF-Chat) (GGUF Q4_K_M).
+
## 🏆 Evaluation

The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on Nous suite. It is the best 7B model to date.
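
The added line points readers to a GGUF Q4_K_M build, i.e. a 4-bit quantized file that trades some quality for a much smaller footprint. As a minimal sketch of running such a file locally with llama-cpp-python (the local filename and the parameter values below are assumptions for illustration, not taken from this commit):

```python
# Minimal sketch: run a Q4_K_M GGUF build of NeuralBeagle14-7B locally.
# The model_path is an assumed filename; point it at whichever GGUF file
# you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./neuralbeagle14-7b.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain DPO fine-tuning in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```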