TheBloke committed on
Commit 192ead1
1 Parent(s): ed93930

Upload README.md

Files changed (1): README.md +8 -11
README.md CHANGED
@@ -48,8 +48,6 @@ tags:
 
 This repo contains GPTQ model files for [Microsoft's Phi 2](https://huggingface.co/microsoft/phi-2).
 
-**NOTE** These GPTQs will only work with Transformers, and require `trust_remote_code=True`.
-
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
 <!-- description end -->
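
The removed note referred to the Transformers loading path. As a rough sketch of that path (not part of this commit), something like the following should work, assuming `transformers` with its GPTQ integration (`optimum` plus `auto-gptq`) and `accelerate` are installed; the prompt is purely illustrative:

```python
# Hedged sketch (not part of this commit): load the default "main" 4-bit
# branch with Transformers. Assumes: pip install transformers optimum
# auto-gptq accelerate, plus a CUDA GPU for the quantised kernels.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/phi-2-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # let accelerate place weights on the GPU(s)
    trust_remote_code=True,  # Phi-2 originally shipped custom modelling code
)

prompt = "Instruct: Explain GPTQ quantisation in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```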
@@ -57,6 +55,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/phi-2-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/phi-2-GGUF)
 * [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/phi-2)
 <!-- repositories-available end -->
 
@@ -95,7 +94,7 @@ Multiple quantisation parameters are provided, to allow you to choose the best o
 
 Each separate quant is in a different branch. See below for instructions on fetching from different branches.
 
-These GPTQs were made with Transformers.
+Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
 
 <details>
 <summary>Explanation of GPTQ parameters</summary>
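
Since each quant sits in its own branch, fetching a specific one amounts to passing a `revision`. A minimal sketch with `huggingface_hub` (the branch name is one example from the table below):

```python
# Hedged sketch: download a single quant branch rather than "main".
# Assumes: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/phi-2-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the table below
)
print(local_dir)  # local cache path holding the quantised files
```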
@@ -112,12 +111,12 @@ These GPTQs were made with Transformers.
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| main | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
-| gptq-4bit-32g-actorder_True | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.98 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
-| gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.05 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
-| gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
-| gptq-8bit-32g-actorder_True | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.28 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
-| gptq-4bit-64g-actorder_True | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
+| [main](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.98 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.05 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
+| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.28 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
 
 <!-- README_GPTQ.md-provided-files end -->
 
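To load one of the non-default branches from the table directly, `from_pretrained` also accepts a `revision` argument; for example (branch name taken from the table above):

```python
# Hedged sketch: select a specific quant branch at load time via revision=.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/phi-2-GPTQ",
    revision="gptq-8bit-128g-actorder_True",  # the 8-bit, group-size-128 row
    device_map="auto",
    trust_remote_code=True,
)
```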
@@ -193,8 +192,6 @@ Note that using Git with HF repos is strongly discouraged. It will be much slowe
 
 Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
-**NOTE** Will only work with `Loader: Transformers`, and requires `trust_remote_code=True`
-
 It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
 
 1. Click the **Model tab**.
 