TheBloke committed on
Commit 41df108
1 Parent(s): 62f2727

New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -15,20 +15,21 @@ This repo contains GGML files for for CPU inference using [llama.cpp](https://gi
 * [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co/TheBloke/dromedary-65B-lora-GGML)
 * [float16 unquantised model for GPU](https://huggingface.co/TheBloke/dromedary-65B-lora-HF)
 
-## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
-llama.cpp recently made a breaking change to its quantisation methods.
+llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
 
-I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
-The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
+For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-`dromedary-lora-65B.ggml.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | 4-bit. |
-`dromedary-lora-65B.ggml.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
-`dromedary-lora-65B.ggml.q5_1.bin` | q5_1 | 5bit | 49GB | 51GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
+`dromedary-lora-65B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | 4-bit. |
+`dromedary-lora-65B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 44.9GB | 47GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
+`dromedary-lora-65B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+`dromedary-lora-65B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 49GB | 51GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
 
 
 # Original Dromedary Model Card
@@ -80,4 +81,4 @@ We use the following configuration for the LoRA weights:
 Fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning),
 
 ## Evaluation dataset
-We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.
+We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.
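
For readers who want to try the re-quantised files, below is a minimal usage sketch via the llama-cpp-python bindings. The bindings are an assumption here (the README only covers llama.cpp itself); whichever build you use must be based on llama.cpp commit `2d5db48` or later so it can read GGMLv3, and per the table above the q4_0 file needs roughly 43GB of free RAM.

```python
# Minimal sketch (assumption: llama-cpp-python built against a llama.cpp
# from commit 2d5db48 or later, so the GGMLv3 files load correctly).
from llama_cpp import Llama

# Path to one of the files from the "Provided files" table; the q4_0 file
# requires roughly 43GB of RAM according to that table.
llm = Llama(
    model_path="dromedary-lora-65B.ggmlv3.q4_0.bin",
    n_ctx=2048,    # context window size
    n_threads=8,   # adjust to your CPU
)

# The Dromedary prompt template is not specified in this README, so a plain
# question is used here purely for illustration.
result = llm("What is the capital of France?", max_tokens=64)
print(result["choices"][0]["text"])
```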