
Information

OpenAssistant-Llama-30B-4-bit, working with the GPTQ versions used in Oobabooga's Text Generation WebUI and KoboldAI.

This was made using Open Assistant's native fine-tune of Llama 30b on their dataset.

What's included

GPTQ: 2 quantized versions. One was quantized using the --true-sequential and --act-order optimizations, and the other using --true-sequential --groupsize 128.

GGML: 3 quantized versions. One was quantized using q4_1, another using q5_0, and the last using q5_1.

Update 05.27.2023

Updated the GGML quantizations to be compatible with the latest version of llama.cpp (again).

Update 04.29.2023

Updated to Open Assistant's latest fine-tune, oasst-sft-7-llama-30b-xor.

GPU/GPTQ Usage

To use your GPU with GPTQ, pick one of the .safetensors files along with all of the .json and .model files; a minimal loading sketch is included after the UI pointers below.

Oobabooga: If you require further instruction, see here and here

KoboldAI: If you require further instruction, see here
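If you want to load the GPTQ files outside of those UIs, the AutoGPTQ library can read them directly. The sketch below is illustrative only: the directory name, the .safetensors basename, and the prompt are assumptions, not the actual filenames in this repo.

```python
# Minimal AutoGPTQ loading sketch (assumed paths/names; adjust to the files
# you actually downloaded).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_dir = "./OpenAssistant-Llama-30B-4bit"  # folder holding the .safetensors, .json, and .model files

# Match the config to the file you picked:
#   --groupsize 128 file -> group_size=128, desc_act=False
#   --act-order file     -> group_size=-1,  desc_act=True
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="model-4bit-128g",  # hypothetical: the .safetensors filename without its extension
    use_safetensors=True,
    quantize_config=quantize_config,
    device="cuda:0",
)

# Open Assistant fine-tunes expect the <|prompter|>/<|assistant|> chat format.
prompt = "<|prompter|>What is GPTQ quantization?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```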

CPU/GGML Usage

To use your CPU with GGML (llama.cpp), you only need the single .bin GGML file; a minimal loading sketch is included after the UI pointers below.

Oobabooga: If you require further instruction, see here

KoboldAI: If you require further instruction, see here
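Outside of those UIs, the GGML file can also be loaded directly with the llama-cpp-python bindings. This is a minimal sketch; the filename, thread count, and prompt are assumptions.

```python
# Minimal llama-cpp-python sketch (assumed filename; use whichever
# q4_1/q5_0/q5_1 .bin you downloaded).
from llama_cpp import Llama

llm = Llama(
    model_path="./OpenAssistant-30B.ggml.q5_1.bin",  # the single GGML .bin file
    n_ctx=2048,    # LLaMA's full context window
    n_threads=8,   # tune to your CPU core count
)

out = llm(
    "<|prompter|>What is GGML?<|endoftext|><|assistant|>",  # Open Assistant chat format
    max_tokens=128,
    stop=["<|endoftext|>"],
)
print(out["choices"][0]["text"])
```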

Benchmarks

--true-sequential --act-order

Wikitext2: 4.964076519012451

Ptb-New: 9.641128540039062

C4-New: 7.203001022338867

Note: This version does not use --groupsize 128, so its perplexity scores are slightly higher. However, it allows fitting the whole model at full context in only 24 GB of VRAM.

--true-sequential --groupsize 128

Wikitext2: 4.641914367675781

Ptb-New: 9.117929458618164

C4-New: 6.867942810058594

Note: This version uses --groupsize 128, which yields lower (better) perplexity scores, but it consumes more VRAM.
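For reference, perplexity numbers like those above are obtained by scoring a held-out corpus in fixed-size windows and exponentiating the average per-token negative log-likelihood. The sketch below shows the general recipe using Hugging Face transformers and datasets; it is not the exact GPTQ evaluation harness that produced the figures above, and the model path is a placeholder, so exact numbers will differ.

```python
# Sliding-window perplexity sketch over WikiText-2 (illustrative recipe only;
# not the exact GPTQ evaluation script used for the benchmarks above).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./path-to-model"  # hypothetical local checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids

window = 2048  # LLaMA's context length
nlls, n_tokens = [], 0
for begin in range(0, ids.size(1), window):
    chunk = ids[:, begin : begin + window].to(model.device)
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean NLL over chunk.size(1) - 1 predictions
    nlls.append(loss * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"wikitext2 perplexity: {ppl.item():.3f}")
```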
