---

# Llama 2 13b Chat German - GGUF

This repository contains [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) in GGUF format.

The original model was created by [jphme](https://huggingface.co/jphme) and is a fine-tune of [Llama2 13b Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat) from Meta, focused on German instructions (helpful for RAG).

## Model Sheet

| **Model Attribute** | **Details** |
|---------------------|-------------|
| **Model** | [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) |
| **Created by** | [jphme](https://huggingface.co/jphme) |

## Replication Steps

Clone and install llama.cpp *(Commit: 9e20231)* and use the provided `convert.py` script to convert the original model to GGUF with FP16 precision. The converted FP16 model is then used for further quantization.
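The steps above might look roughly like the following sketch. The local model path, output filenames, and the `q4_k_m` quantization type are illustrative assumptions, not the exact commands used for this repository:

```shell
# Clone llama.cpp and pin the commit noted above
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 9e20231
make

# Convert the original HF model (assumed already downloaded to ./models/)
# to GGUF with FP16 precision
python3 convert.py models/Llama-2-13b-chat-german \
    --outtype f16 \
    --outfile llama-2-13b-chat-german.fp16.gguf

# Quantize the FP16 GGUF; q4_k_m is just one example quantization type
./quantize llama-2-13b-chat-german.fp16.gguf \
    llama-2-13b-chat-german.q4_k_m.gguf q4_k_m
```

The FP16 conversion is done once, and each quantized variant is produced from that single FP16 file rather than re-converting the original weights.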