Exllama v2 Quantizations of L3.1-8B-Celeste-V1.5
Using turboderp's ExLlamaV2 v0.1.8 for quantization.
The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except that for quantizations above 6.0 bits per weight the lm_head layer is quantized at 8 bits per weight instead of the default 6.
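As a rough sketch of what that implies (the paths are placeholders and the exact flag set is my reading of the ExLlamaV2 v0.1.8 converter, not a command taken from this conversion), a 6.5 bpw quant with the 8-bit head would be produced with something like:
# -b sets the target bits per weight, -hb quantizes lm_head at 8 bits
# (since 6.5 is above the 6.0 threshold), and -m reuses the existing
# measurement.json so the measurement pass is skipped.
python convert.py -i /path/to/L3.1-8B-Celeste-V1.5 -o /path/to/workdir -cf /path/to/L3.1-8B-Celeste-V1.5-exl2-6_5 -b 6.5 -hb 8 -m measurement.json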
Original model: https://huggingface.co./nothingiisreal/L3.1-8B-Celeste-V1.5
Download instructions
With git:
git clone --single-branch --branch 6_5 https://huggingface.co./bartowski/L3.1-8B-Celeste-V1.5-exl2
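Note that cloning pulls the full LFS weights for that branch up front. If you would rather fetch the pointers first and the weights separately, something like this should work (assuming git-lfs is installed):
GIT_LFS_SKIP_SMUDGE=1 git clone --single-branch --branch 6_5 https://huggingface.co./bartowski/L3.1-8B-Celeste-V1.5-exl2
cd L3.1-8B-Celeste-V1.5-exl2
git lfs pull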
With huggingface hub (credit to TheBloke for instructions):
pip3 install huggingface-hub
To download the main branch (only useful if you only care about the measurement.json) to a folder called L3.1-8B-Celeste-V1.5-exl2:
mkdir L3.1-8B-Celeste-V1.5-exl2
huggingface-cli download bartowski/L3.1-8B-Celeste-V1.5-exl2 --local-dir L3.1-8B-Celeste-V1.5-exl2
To download from a different branch, add the --revision parameter:
Linux:
mkdir L3.1-8B-Celeste-V1.5-exl2-6_5
huggingface-cli download bartowski/L3.1-8B-Celeste-V1.5-exl2 --revision 6_5 --local-dir L3.1-8B-Celeste-V1.5-exl2-6_5
Windows (which sometimes has issues with _ in folder names):
mkdir L3.1-8B-Celeste-V1.5-exl2-6.5
huggingface-cli download bartowski/L3.1-8B-Celeste-V1.5-exl2 --revision 6_5 --local-dir L3.1-8B-Celeste-V1.5-exl2-6.5
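On fast connections, downloads can optionally be accelerated with the hf_transfer backend (my addition, not part of the original instructions; assumes a recent huggingface_hub):
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/L3.1-8B-Celeste-V1.5-exl2 --revision 6_5 --local-dir L3.1-8B-Celeste-V1.5-exl2-6_5
On Windows, set the HF_HUB_ENABLE_HF_TRANSFER environment variable with set instead of prefixing the command.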