TheBloke committed on
Commit f630015
1 Parent(s): 58973b8

Update README.md

Files changed (1)
  1. README.md +13 -27
README.md CHANGED
@@ -19,41 +19,27 @@ Eric did a fresh 7B training using the WizardLM method, on [a dataset edited to
  * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML)
  * [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-7B-Uncensored)

- ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
- | ---- | ---- | ---- | ---- | ---- | ----- |
- | `WizardLM-7B-uncensored.q4_0.bin` | q4_0 | 4bit | 4.2GB | 6GB | Maximum compatibility |
- | `WizardLM-7B-uncensored.q4_2.bin` | q4_2 | 4bit | 4.2GB | 6GB | Best compromise between resources, speed and quality |
- | `WizardLM-7B-uncensored.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
- | `WizardLM-7B-uncensored.q5_1.bin` | q5_1 | 5bit | 5.0GB | 7GB | Brand new 5bit method. Slightly higher resource usage than q5_0. |
-
- * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- * The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
- * The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.
-
- ## q4_2 compatibility
-
- q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.
 
- In order to use these files you will need recent llama.cpp code, and it's possible that future updates to llama.cpp could require these files to be re-generated.
 
- If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.
 
- If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
 
- ## q5_0 and q5_1 compatibility
-
- These new methods were released to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
-
- Don't expect any third-party UIs/tools to support them yet.

  ## How to run in `llama.cpp`

  I use the following command line; adjust for your tastes and needs:

  ```
- ./main -t 12 -m WizardLM-7B-uncensored.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
  ### Instruction:
  Write a story about llamas
  ### Response:"
@@ -66,9 +52,9 @@ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argumen

  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui will not support the new q5 quantisation methods.

- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

  # Eric's original model card

  * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML)
  * [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-7B-Uncensored)

+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

+ llama.cpp recently made a breaking change to its quantisation methods.

+ I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.

+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
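
As a rough sketch of what this requirement can look like in practice (assuming a Unix-like system with `git`, `make` and `git-lfs` available, and using only the commit hash and branch name quoted above):

```
# Build llama.cpp at (or after) the commit that introduced the new quantisation format
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # or stay on the latest code, which is newer than this commit
make

# The older-format files live in the `previous_llama` branch of this model repo;
# git-lfs is needed to pull the large .bin files.
git lfs install
git clone --branch previous_llama https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML
```
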
+ ## Provided files
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | `WizardLM-7B-uncensored.q4_0.bin` | q4_0 | 4bit | 4.2GB | 6GB | 4-bit. |
+ | `WizardLM-7B-uncensored.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | `WizardLM-7B-uncensored.q5_1.bin` | q5_1 | 5bit | 5.0GB | 7GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
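
As a minimal illustration of fetching one of the files listed above (assuming the files are stored in this repo under exactly the names shown in the table, and using Hugging Face's standard `resolve` download URL layout; swap in whichever variant suits your RAM budget):

```
wget https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML/resolve/main/WizardLM-7B-uncensored.q5_0.bin
```

Replacing `main` with `previous_llama` in the URL should fetch the older-format equivalent from the branch mentioned above.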

  ## How to run in `llama.cpp`

  I use the following command line; adjust for your tastes and needs:

  ```
+ ./main -t 12 -m WizardLM-7B-uncensored.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
  ### Instruction:
  Write a story about llamas
  ### Response:"
 
  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

+ Note: at this time text-generation-webui will likely not support the updated llama.cpp quantisation methods.

+ **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) which may help you to update text-gen-ui so it can use the more recent quantisation methods.

  # Eric's original model card