## Model Sheet

| **Attribute**              | **Details** |
|----------------------------|-------------|
| **Model**                  | [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) |
| **Format**                 | GGUF |
| **Quantization Levels**    | 8 Bit<br>5 Bit K_M |
| **Conversion Tool Used**   | llama.cpp (commit 9e20231) |
| **Original Model Creator** | [jphme](https://huggingface.co/jphme) |
| **Training Data**          | Proprietary German conversation dataset, German SQuAD, and German legal SQuAD data, augmented with "wrong" contexts to improve factual RAG |

## Replication Steps

Clone and build llama.cpp *(commit 9e20231)*, then use its `convert.py` script to convert the original model to GGUF at FP16 precision. The resulting FP16 model is then used for further quantization.
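The steps above can be sketched as the following shell session. This is a minimal illustration, not a verbatim transcript: the model directory and output file names are placeholders, while the `convert.py` flags and `quantize` type names (`q8_0`, `q5_k_m`) match the llama.cpp tooling at the referenced commit.

```shell
# Clone llama.cpp and pin the commit used for this conversion
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 9e20231
make

# Convert the original Hugging Face model to GGUF at FP16 precision
# (/path/to/Llama-2-13b-chat-german is a placeholder for the local model dir)
python3 convert.py /path/to/Llama-2-13b-chat-german \
  --outtype f16 --outfile llama-2-13b-chat-german-f16.gguf

# Quantize the FP16 GGUF to the two published levels
./quantize llama-2-13b-chat-german-f16.gguf llama-2-13b-chat-german-q8_0.gguf   q8_0
./quantize llama-2-13b-chat-german-f16.gguf llama-2-13b-chat-german-q5_k_m.gguf q5_k_m
```

Quantizing from an FP16 intermediate (rather than re-converting per level) means the conversion step runs once and each quantization level is derived from the same baseline file.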