freefallr committed
Commit: 2ee29c6
Parent: 777707c

Update README.md

Files changed (1): README.md (+8 -8)
README.md CHANGED
@@ -22,14 +22,14 @@ The original model was created by [jphme](https://huggingface.co/jphme) and is a
 
 ## Model Sheet
 
-| **Model Attribute** | **Details** |
-|--------------------------|--------------------------------------------------------------------------------------------------------------|
-| **Format** | GGUF |
-| **Converted with** | llama.cpp (Commit: 9e20231) |
-| **Quantization Levels** | 8 Bit<br> 5 Bit K_M <br> 4 Bit K_M |
-| **Model** | [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) |
-| **Created by** | [jphme](https://huggingface.co/jphme) |
-| **Training Data** | Proprietary German conversation dataset, German SQuAD, and German legal SQuAD data, augmented with "wrong" contexts to improve factual RAG performance |
+| **Attribute** | **Details** |
+|----------------------------|--------------------------------------------------------------------------------------------------------------|
+| **Model** | [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) |
+| **Format** | GGUF |
+| **Quantization Levels** | 8 Bit<br> 5 Bit K_M |
+| **Conversion Tool used** | llama.cpp (Commit: 9e20231) |
+| **Original Model Creator** | [jphme](https://huggingface.co/jphme) |
+| **Training Data Info** | Proprietary German conversation dataset, German SQuAD, and German legal SQuAD data, augmented with "wrong" contexts to improve factual RAG performance |
 
 ## Replication Steps
 Clone and install llama.cpp *(Commit: 9e20231)*, then use the provided `convert.py` script to convert the original model to GGUF at FP16 precision. The converted model is then used for further quantization.
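
For reference, the replication steps described in the README could look roughly like the following shell session. This is a minimal sketch, not part of the original commit: the model path and output filenames are placeholders, and the `convert.py` flags and `quantize` invocation assume the tools as they existed around llama.cpp commit 9e20231.

```sh
# Check out llama.cpp at the commit referenced in the model sheet.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 9e20231

# Build the quantize tool.
make quantize

# Convert the original Hugging Face model (placeholder path) to GGUF at FP16 precision.
python3 convert.py /path/to/Llama-2-13b-chat-german \
  --outtype f16 --outfile llama-2-13b-chat-german-f16.gguf

# Quantize the FP16 GGUF to the levels listed in the model sheet.
./quantize llama-2-13b-chat-german-f16.gguf llama-2-13b-chat-german-Q8_0.gguf Q8_0
./quantize llama-2-13b-chat-german-f16.gguf llama-2-13b-chat-german-Q5_K_M.gguf Q5_K_M
```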