---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
license: other
license_name: microsoft-research-license
license_link: LICENSE
quantized_by: bartowski
---

## Exllama v2 Quantizations of Orca-2-13B-16k

Using turboderp's ExLlamaV2 v0.0.9 for quantization.

Each branch contains a quantization at a single bits-per-weight value; the `main` branch contains only the measurement.json used for further conversions (a conversion sketch is included at the end of this card).

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.

Default arguments were used, except for targets above 6.0 bits per weight, where the lm_head layer is quantized at 8 bits per weight instead of the default 6.

Original model: https://huggingface.co./NurtureAI/Orca-2-13B-16k

Available quantizations:

- 4.0 bits per weight
- 5.0 bits per weight
- 6.0 bits per weight
- 8.0 bits per weight

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co./bartowski/Orca-2-13B-16k-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `Orca-2-13B-16k-exl2`:

```shell
mkdir Orca-2-13B-16k-exl2
huggingface-cli download bartowski/Orca-2-13B-16k-exl2 --local-dir Orca-2-13B-16k-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Orca-2-13B-16k-exl2
huggingface-cli download bartowski/Orca-2-13B-16k-exl2 --revision 4_0 --local-dir Orca-2-13B-16k-exl2 --local-dir-use-symlinks False
```
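## Running a quick test

To sanity-check a downloaded quantization, you can use the `test_inference.py` script that ships with the ExLlamaV2 repository. This is a minimal sketch, not part of the original instructions: the paths and the prompt are illustrative, and script arguments may differ between ExLlamaV2 versions.

```shell
# Fetch ExLlamaV2 (the project used for this quantization) and its dependencies
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip3 install -r requirements.txt

# Generate a short completion from the downloaded quantization
# (assumes the model was downloaded to ../Orca-2-13B-16k-exl2)
python test_inference.py -m ../Orca-2-13B-16k-exl2 -p "Once upon a time,"
```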
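## Converting to other sizes

The measurement.json in the `main` branch lets you produce additional bits-per-weight variants without repeating the slow measurement pass. The sketch below assumes ExLlamaV2's `convert.py`; all paths are illustrative, the 3.5 bpw target is only an example, and flag names may vary between ExLlamaV2 versions.

```shell
# Quantize the original FP16 model to a new target, reusing the
# published measurement.json to skip the measurement pass.
# All paths below are illustrative.
python convert.py \
  -i /path/to/Orca-2-13B-16k \
  -o /path/to/empty-workdir \
  -m /path/to/measurement.json \
  -c wikitext-103-raw-v1-test.parquet \
  -b 3.5

# For targets above 6.0 bpw, this card quantized lm_head at
# 8 bits instead of the default 6 (e.g. add: -hb 8)
```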