A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/Llama-2-13b-chat-german-GGML).
Please note the license of the base model, which is included in this repo as LICENSE.TXT, and see the original model card below for more information.
## Prompt Template
Llama2 Chat uses a new prompt format:
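As a sketch, assuming this model follows the standard upstream Llama 2 chat convention unchanged, a single-turn prompt can be assembled like this (the helper name is illustrative, not part of this repo):

```python
def build_llama2_prompt(system_msg: str, user_msg: str) -> str:
    """Assemble a single-turn prompt in the Llama 2 chat format:
    [INST] <<SYS>> ... <</SYS>> user message [/INST]"""
    return (
        f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

prompt = build_llama2_prompt(
    "Du bist ein hilfreicher Assistent.",  # German system prompt, fitting this model
    "Wie heißt die Hauptstadt von Deutschland?",
)
print(prompt)
```

The model's completion is expected to follow directly after the closing `[/INST]` tag.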