TheBloke committed on
Commit 23ffc38
1 Parent(s): e1dcb08

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -26,6 +26,7 @@ It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQi
 ## Repositories available
 
 * [4bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ).
+* [3bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-3bit-GPTQ).
 * [Eric's float16 HF format model for GPU inference and further conversions](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b).
 
 ## EXPERIMENTAL
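
The diff above adds a 3bit GPTQ repo alongside the existing 4bit one, both produced with AutoGPTQ. As a minimal sketch only (not part of this commit or the model card), the snippet below shows one way such a repo could be loaded with AutoGPTQ; the repo ID is taken from the diff, while the device, safetensors, generation settings, and prompt text are assumptions that should be checked against the specific model card.

```python
# Minimal loading sketch, not from this commit: loads one of the GPTQ repos
# listed in the diff with AutoGPTQ. Settings below are assumptions; older
# auto-gptq versions may also need model_basename set to the checkpoint name
# given in the repo's model card.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ"  # or the 3bit repo

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Falcon checkpoints ship custom modelling code, so trust_remote_code is needed;
# device places the quantised model on a single GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,
    trust_remote_code=True,
)

prompt = "Tell me about falcons."  # see the model card for the prompt template
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```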