Information
This is a quantized version of Tim Dettmers' Guanaco 33B, working with Oobabooga's Text Generation WebUI and KoboldAI.
What's included
GPTQ: 2 quantized versions. One was quantized with the --true-sequential and --act-order optimizations, the other with --true-sequential --groupsize 128.
GGML: 3 quantized versions, using q4_1, q5_0, and q5_1 respectively.
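As a rough guide to the size trade-off between the three GGML formats, the arithmetic below estimates effective bits per weight. It assumes the classic llama.cpp block layouts (32 weights per block, fp16 scale, fp16 min for the "_1" types); these layouts are an assumption on my part, not something stated in this card.

```python
# Effective bits-per-weight for the GGML formats listed above, assuming
# llama.cpp's 32-weight quantization blocks (layout sizes are assumptions).
BLOCK = 32  # weights per quantization block

block_bytes = {
    "q4_1": 2 + 2 + BLOCK // 2,      # fp16 scale + fp16 min + 4-bit quants = 20 B
    "q5_0": 2 + 4 + BLOCK // 2,      # fp16 scale + packed 5th bits + low nibbles = 22 B
    "q5_1": 2 + 2 + 4 + BLOCK // 2,  # fp16 scale + fp16 min + 5th bits + nibbles = 24 B
}

for name, nbytes in block_bytes.items():
    print(f"{name}: {nbytes * 8 / BLOCK:.1f} bits/weight")
```

Under these assumptions, q4_1 is the smallest and q5_1 the largest but most precise of the three.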
GPU/GPTQ Usage
To run on your GPU with GPTQ, download one of the .safetensors files along with all of the .json and .model files.
Oobabooga: If you require further instruction, see here and here
KoboldAI: If you require further instruction, see here
CPU/GGML Usage
To run on your CPU using GGML (llama.cpp), you only need the single .bin GGML file.
Oobabooga: If you require further instruction, see here
KoboldAI: If you require further instruction, see here
Benchmarks
--true-sequential --act-order
Wikitext2: 4.582493305206299
Ptb-New: 8.697775840759277
C4-New: 6.67733097076416
Note: This version does not use --groupsize 128, so its perplexity scores are slightly higher. However, it allows fitting the whole model at full context in only 24 GB of VRAM.
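The 24 GB figure can be sanity-checked with back-of-the-envelope math. The sketch below assumes roughly 33 billion parameters at 4 bits, and that --groupsize 128 stores an fp16 scale plus a packed 4-bit zero point per 128-weight group; both assumptions are illustrative estimates, not measurements from this card.

```python
# Rough weight-storage math for the two GPTQ variants (illustrative only).
params = 33e9  # assumed parameter count

# Plain 4-bit weights (the act-order variant, no --groupsize):
plain_gb = params * 4 / 8 / 1e9  # ~16.5 GB, leaving headroom in 24 GB

# --groupsize 128 adds a scale/zero pair per 128 weights; assuming an
# fp16 scale plus a packed 4-bit zero, that is 20 extra bits per group:
grouped_bits = 4 + (16 + 4) / 128  # ~4.16 bits per weight
grouped_gb = params * grouped_bits / 8 / 1e9

print(f"act-order:     ~{plain_gb:.1f} GB of weights")
print(f"groupsize 128: ~{grouped_gb:.1f} GB of weights")
```

The extra scales and zeros are small relative to the weights themselves, but combined with activation and context memory they push the groupsize-128 variant past what the act-order variant needs.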
--true-sequential --groupsize 128
Wikitext2: 4.369843006134033
Ptb-New: 8.53034496307373
C4-New: 6.496636390686035
Note: This version uses --groupsize 128, resulting in lower (better) perplexity scores. However, it consumes more VRAM.
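To put the two sets of scores in perspective, the snippet below computes the relative perplexity gap between the variants, using only the numbers listed above (lower perplexity is better).

```python
# Relative perplexity difference between the two GPTQ variants on this card.
act_order = {"Wikitext2": 4.582493305206299,
             "Ptb-New": 8.697775840759277,
             "C4-New": 6.67733097076416}
group128 = {"Wikitext2": 4.369843006134033,
            "Ptb-New": 8.53034496307373,
            "C4-New": 6.496636390686035}

for name in act_order:
    delta = (act_order[name] - group128[name]) / group128[name] * 100
    print(f"{name}: act-order is {delta:.1f}% higher perplexity than groupsize 128")
```

The gap is largest on Wikitext2 (just under 5%) and stays below 3% on the other two sets, which is the "minimal" difference the note above refers to.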