---
license: other
inference: false
---

## Dromedary-65B-LoRA GGML

These files are the result of merging the [delta weights of IBM's Dromedary 65B LoRA](https://huggingface.co./zhiqings/dromedary-65b-lora-delta-v0) with the original Llama 65B model.

This repo contains GGML files for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co./TheBloke/dromedary-65B-lora-GPTQ)
* [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co./TheBloke/dromedary-65B-lora-GGML)

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `dromedary-lora-65B.ggml.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | Maximum compatibility |
| `dromedary-lora-65B.ggml.q4_2.bin` | q4_2 | 4bit | 40.8GB | 43GB | Best compromise between resources, speed and quality |
| `dromedary-lora-65B.ggml.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
| `dromedary-lora-65B.ggml.q5_1.bin` | q5_1 | 5bit | 49GB | 51GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |

* The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
* The q5_0 file uses the brand-new 5bit method released 26th April. This is the 5bit equivalent of q4_0.
* The q5_1 file uses the brand-new 5bit method released 26th April.

# Original Dromedary Model Card

See https://github.com/IBM/Dromedary#model-weights for instructions.

## Model details
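
## How to run in llama.cpp

As a sketch of CPU inference with one of the files above, you can pass it to llama.cpp's `main` binary. The thread count, context size, sampling parameters, and prompt below are illustrative assumptions, not values specified by this repo; adjust them for your hardware and use case.

```shell
# Hypothetical invocation; assumes llama.cpp has been built in the current
# directory and the GGML file has been downloaded alongside it.
./main \
  -m dromedary-lora-65B.ggml.q5_0.bin \  # model file (any of the quantizations above)
  -t 12 \                                # number of CPU threads (placeholder)
  -c 2048 \                              # context size
  --temp 0.7 --repeat_penalty 1.1 \      # example sampling settings
  -n 128 \                               # number of tokens to generate
  -p "Write a short poem about camels."  # example prompt
```

Note that the q4_2 file may require a recent build of llama.cpp, since that format was still subject to change at the time of upload.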