---
license: other
inference: false
---

Dromedary-65B-LoRA GGML

These files are the result of merging the delta weights of IBM's Dromedary 65B LoRA with the original Llama 65B model.

This repo contains GGML files for CPU inference using llama.cpp.

Repositories available

THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit 2d5db48 or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch previous_llama_ggmlv2.
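If you need to build a compatible llama.cpp yourself, here is a minimal sketch (plain CPU build; exact steps may differ on your platform, e.g. for BLAS or GPU-accelerated builds):

# Build llama.cpp at or after commit 2d5db48, which introduced the GGMLv3 format.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # or any later commit
make                   # CPU-only build; see the llama.cpp README for other targets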

Provided files

Name                                 Quant method   Bits   Size      RAM required   Use case
dromedary-lora-65B.ggmlv3.q4_0.bin   q4_0           4      40.8 GB   43 GB          4-bit.
dromedary-lora-65B.ggmlv3.q4_1.bin   q4_1           4      44.9 GB   47 GB          4-bit. Higher accuracy than q4_0 but not as high as q5_0; quicker inference than the q5 models.
dromedary-lora-65B.ggmlv3.q5_0.bin   q5_0           5      44.9 GB   47 GB          5-bit. Higher accuracy, higher resource usage, and slower inference.
dromedary-lora-65B.ggmlv3.q5_1.bin   q5_1           5      49 GB     51 GB          5-bit. Even higher accuracy, higher resource usage, and slower inference.
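As a rough usage sketch (these flag names come from llama.cpp builds of this era; adjust thread count, token count, and prompt to your setup), running the q4_0 file looks something like:

# Run the 4-bit q4_0 model on the CPU (needs roughly 43 GB of RAM).
# -t sets the number of CPU threads; -n the number of tokens to generate.
./main -m dromedary-lora-65B.ggmlv3.q4_0.bin -t 8 -n 256 -p "Write a short story about a camel."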

Original Dromedary Model Card

See https://github.com/IBM/Dromedary#model-weights for instructions.

Model details


Model type: Dromedary is an open-source self-aligned language model trained with minimal human supervision. The base language model is LLaMA-65b, based on the transformer architecture.

Model date: Dromedary was trained between April 2023 and May 2023, but its knowledge cutoff is September 2021.

Organizations developing the model: The Dromedary team as a joint effort between CMU and IBM.

Paper or resources for more information: https://mitibmdemos.draco.res.ibm.com/dromedary

License: LLaMA's Non-commercial bespoke license

Where to send questions or comments about the model: https://github.com/IBM/Dromedary/issues

Intended use

Primary intended uses: The primary use of Dromedary is research on the alignment of large language models.

Primary intended users: The primary intended users of the model are researchers in artificial intelligence.

Delta weights

We use the following configuration for the LoRA weights (rank-16 adapters applied to the four attention projection matrices q, k, v, and o):

--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
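For context, here is a purely hypothetical sketch of how these flags might slot into a full fine-tuning invocation; the script name and the remaining arguments below are placeholders, not the Dromedary team's actual command:

# Hypothetical invocation; train_lora.py and --base_model are placeholders.
python train_lora.py \
    --base_model /path/to/llama-65b \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16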

Training dataset

Fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning).

Evaluation dataset

We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.