Update README.md
Load text-generation-webui as you normally do.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/stable-vicuna-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
6. Now click the **Refresh** icon next to **Model** in the top left.
7. In the **Model drop-down**, choose this model: `stable-vicuna-13B-GPTQ`.
8. Click **Reload the Model** in the top right.
9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

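If you script your setup rather than clicking through the UI, it helps to know where the download from step 2 lands. text-generation-webui stores a downloaded repo inside its `models/` folder, with the `/` in the repo name replaced by `_`. The sketch below assumes that default layout; `local_model_dir` is a hypothetical helper, not part of the webui:

```python
from pathlib import Path

def local_model_dir(repo_id: str, models_root: str = "models") -> Path:
    """Map a Hugging Face repo id to the folder text-generation-webui
    uses for it, e.g. 'TheBloke/stable-vicuna-13B-GPTQ' ->
    'models/TheBloke_stable-vicuna-13B-GPTQ'."""
    return Path(models_root) / repo_id.replace("/", "_")

print(local_model_dir("TheBloke/stable-vicuna-13B-GPTQ"))
```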
## GIBBERISH OUTPUT IN `text-generation-webui`?

If you're installing the model files manually, please read the Provided Files section below. You should use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.

If you're using a text-generation-webui one-click installer, you MUST use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`.
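The two rules above collapse into a single check, sketched below. `pick_model_file` is a hypothetical helper that only restates the rule, and the fallback string is a placeholder, not a real filename (see the Provided Files section for the actual alternative):

```python
COMPAT_FILE = "stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors"

def pick_model_file(latest_gptq_for_llama: bool, one_click_installer: bool) -> str:
    """Restate the rule: one-click installers, and any setup not running
    the latest GPTQ-for-LLaMa code, must use the compat, no-act-order file."""
    if one_click_installer or not latest_gptq_for_llama:
        return COMPAT_FILE
    # Placeholder only -- the act-order alternative is listed
    # in the Provided Files section.
    return "<act-order file from Provided Files>"

print(pick_model_file(latest_gptq_for_llama=False, one_click_installer=True))
```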