SOLAR-10.7B-Instruct-v1.0-uncensored

Description

This repo contains GGUF format model files for SOLAR-10.7B-Instruct-v1.0-uncensored.

Files Provided

| Name | Quant | Bits | File Size | Remark |
| ---- | ----- | ---- | --------- | ------ |
| solar-10.7b-instruct-v1.0-uncensored.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization |
| solar-10.7b-instruct-v1.0-uncensored.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization |
| solar-10.7b-instruct-v1.0-uncensored.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix |
| solar-10.7b-instruct-v1.0-uncensored.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl |
| solar-10.7b-instruct-v1.0-uncensored.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization |
| solar-10.7b-instruct-v1.0-uncensored.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl |
| solar-10.7b-instruct-v1.0-uncensored.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl |
| solar-10.7b-instruct-v1.0-uncensored.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl |
| solar-10.7b-instruct-v1.0-uncensored.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl |
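
As a rough sanity check on the sizes above, a quantized GGUF file occupies at least `params × bits-per-weight / 8` bytes; the listed files run somewhat larger because GGUF stores metadata and keeps some tensors (e.g. embedding and output weights) at higher precision. A minimal sketch, assuming the 10.7B parameter count and the bpw figures from the table:

```python
def approx_gguf_size_gb(n_params: float, bpw: float) -> float:
    """Rough lower bound on a quantized model's file size in decimal GB."""
    return n_params * bpw / 8 / 1e9

# IQ3_XXS at 3.06 bpw over 10.7B parameters:
print(round(approx_gguf_size_gb(10.7e9, 3.06), 2))  # ~4.09 GB vs. the 4.44 GB listed
```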

Parameters

| Path | Type | Architecture | rope_theta | sliding_window | max_position_embeddings |
| ---- | ---- | ------------ | ---------- | -------------- | ----------------------- |
| w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored | llama | LlamaForCausalLM | 10000.0 | null | 4096 |

Original Model Card


license: apache-2.0

upstage/SOLAR-10.7B-Instruct-v1.0 fine-tuned on the unalignment/toxic-dpo-v0.1 dataset.
