TheBloke committed
Commit ca911d3
Parent: 70b698b

Update README.md

Files changed (1): README.md (+9, -6)
README.md CHANGED
@@ -51,17 +51,20 @@ This requires text-generation-webui version of commit `204731952ae59d79ea3805a42
 
 So please first update text-generation-webui to the latest version.
 
-### How to download and use this model in text-generation-webui
+## How to download and use this model in text-generation-webui
 
 1. Launch text-generation-webui
 2. Click the **Model tab**.
-3. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ`.
-4. Click **Download**.
-5. Wait until it says it's finished downloading.
-6. Tick **Trust Remote Code**
+3. Untick **Autoload model**.
+4. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ`.
+5. Click **Download**.
+6. Wait until it says it's finished downloading.
 7. Click the **Refresh** icon next to **Model** in the top left.
 8. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-Uncensored-Falcon-40B-GPTQ`.
-9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+9. Make sure **Loader** is set to **AutoGPTQ**. This model will not work with ExLlama or GPTQ-for-LLaMa.
+10. Tick **Trust Remote Code**, then click **Save Settings**.
+11. Click **Reload**.
+12. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
 
 ## Python inference
 
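The revised instructions configure three things before the model is loaded: the AutoGPTQ loader, trust-remote-code, and autoload disabled so the settings take effect first. As an illustrative sketch only (this is not text-generation-webui's actual code; the function name and dictionary keys are assumptions), the per-model settings those steps save look roughly like:

```python
# Illustrative sketch of the per-model settings implied by the updated steps.
# Keys and the helper name are hypothetical, not the webui's real schema.
def model_settings(repo_id: str) -> dict:
    """Return the settings configured before Save Settings / Reload."""
    model_name = repo_id.split("/")[-1]  # webui stores the model by its repo name
    return {
        "model_name": model_name,
        "loader": "AutoGPTQ",        # step 9: ExLlama / GPTQ-for-LLaMa won't work
        "trust_remote_code": True,   # step 10: needed for Falcon's custom model code
        "autoload_model": False,     # step 3: untick so settings apply before loading
    }

settings = model_settings("TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ")
print(settings["model_name"])  # WizardLM-Uncensored-Falcon-40B-GPTQ
```

Only after these settings are saved does clicking **Reload** load the model with the correct loader.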