TheBloke committed
Commit 47b771d
1 Parent(s): 296586e

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -22,11 +22,11 @@ These files are GPTQ 4bit model files for [Tim Dettmers' Guanaco 33B](https://hu
 
 It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
-## Other repositories available
+## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-33B-GPTQ)
-* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-33B-GGML)
-* [Original unquantised fp16 model in HF format](https://huggingface.co/timdettmers/guanaco-33b-merged)
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-33B-GGML)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-33b-merged)
 
 ## Prompt template
 
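
As a rough illustration only, the 4-bit GPTQ repository listed in the hunk above can typically be loaded for GPU inference with the AutoGPTQ library; the loader call, the `use_safetensors` flag, and the Guanaco-style prompt string below are assumptions for the sketch, not instructions taken from this README.

```python
# Minimal sketch (assumptions, not from this README): load the 4-bit GPTQ repo
# listed above with AutoGPTQ and run a single generation on GPU.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/guanaco-33B-GPTQ"  # repository named in the diff

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,  # assumption: the repo ships .safetensors weights
    device="cuda:0",
)

# Guanaco-style prompt; the actual template is defined in the README's
# "Prompt template" section, which is not shown in this diff.
prompt = "### Human: What is GPTQ quantisation?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```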