freefallr committed on
Commit
363d7a4
1 Parent(s): e86ad5e

Update README.md

Files changed (1): README.md +3 -2
README.md CHANGED
@@ -30,10 +30,11 @@ This model was created by [jphme](https://huggingface.co/jphme) and is a fine-tu
 This repository contains the model [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) in GGUF format.
 
 ## Replication Steps
-Clone llama.cpp *(Commit: 9e20231)*, compile it and use the provided `convert.py` file to convert the original model to GGUF with FP16 precision. The converted model will then be used for further quantization.
+Clone and install llama.cpp *(Commit: 9e20231)* and use the provided `convert.py` file to convert the original model to GGUF with FP16 precision. The converted model will then be used for further quantization.
 ```
-# Convert original model to FP16 GGUF format
+# Convert original model to GGUF format with FP16 precision
 python3 llama.cpp/convert.py ./original-models/Llama-2-13b-chat-german --outtype f16 --outfile ./converted_gguf/Llama-2-13b-chat-german-GGUF.fp16.bin
+
 # Quantize FP16 GGUF to 8, 5_K_M and 4_K_M bit
 ./llama.cpp/quantize Llama-2-13b-chat-german-GGUF.fp16.bin Llama-2-13b-chat-german-GGUF.q8_0.bin q8_0
 ./llama.cpp/quantize Llama-2-13b-chat-german-GGUF.fp16.bin Llama-2-13b-chat-german-GGUF.q5_K_M.bin q5_K_M
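
The "clone and install" step in the new text is not expanded inside the hunk. A minimal sketch of what it typically looks like, assuming the upstream repository at https://github.com/ggerganov/llama.cpp and the plain Makefile build that llama.cpp used around commit 9e20231 (both are assumptions, not part of this commit):

```
# Fetch llama.cpp and pin it to the commit cited in the README
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 9e20231

# Build the CLI tools; at this point in llama.cpp's history a plain
# `make` produced the ./main and ./quantize binaries used above
make
```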
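
To sanity-check one of the quantized files, it can be loaded with the `main` binary built above. A hedged example (the prompt and token count are illustrative; `-m`, `-p` and `-n` are long-standing llama.cpp flags for model path, prompt, and number of tokens to generate):

```
# Generate a short completion from the 5_K_M quantized model
./llama.cpp/main -m ./Llama-2-13b-chat-german-GGUF.q5_K_M.bin \
  -p "Was ist Quantisierung bei Sprachmodellen?" -n 256
```

As a rule of thumb, q8_0 stays closest to the FP16 original at the largest file size, while q5_K_M and q4_K_M trade a small amount of quality for substantially smaller files.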