TheBloke committed on
Commit
669f82a
1 Parent(s): 75d8e25

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -13,6 +13,7 @@ This repo contains GGML files for for CPU inference using [llama.cpp](https://gi

 * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/dromedary-65B-lora-GPTQ)
 * [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co/TheBloke/dromedary-65B-lora-GGML)
+* [float16 unquantised model for GPU](https://huggingface.co/TheBloke/dromedary-65B-lora-HF)

 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
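
The line added by this commit links to the unquantised float16 weights. As a minimal sketch (not part of the commit), that repo could be loaded for GPU inference with the Hugging Face transformers library roughly as follows; the prompt, generation settings, and device handling are illustrative assumptions, not instructions from the README.

```python
# Illustrative sketch: loading the float16 repo linked in the added line.
# Assumes transformers, accelerate, and torch are installed and enough GPU
# memory is available for a 65B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/dromedary-65B-lora-HF"  # repo id taken from the diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # unquantised float16 weights
    device_map="auto",          # spread layers across available GPUs
)

prompt = "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```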