GGUF quantizations of princeton-nlp/gemma-2-9b-it-SimPO.
These files are quantized with larger embedding and output weights than the default GGUF setting (see the usage sketch after this list):
- Q8_0 embedding and output weights: Q6_K_L, Q5_K_L, Q4_K_L
- bf16 embedding and output weights (inference may be slower): Q8_0_L, Q6_K_XL, Q5_K_XL, Q4_K_XL
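
For reference, a minimal sketch of downloading one of these files and running it with llama-cpp-python. The filename used below (`gemma-2-9b-it-SimPO-Q4_K_L.gguf`) is an assumption; check the repository's file listing for the exact names, and adjust `n_ctx` / `n_gpu_layers` to your hardware.

```python
# Sketch: fetch one GGUF variant from this repo and chat with it via llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed filename; verify against the repository's Files tab.
model_path = hf_hub_download(
    repo_id="pipihand01/gemma-2-9b-it-SimPO-GGUF",
    filename="gemma-2-9b-it-SimPO-Q4_K_L.gguf",
)

# The chat template is read from the GGUF metadata.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain SimPO in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```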
Model lineage: google/gemma-2-9b → fine-tuned to google/gemma-2-9b-it → fine-tuned to princeton-nlp/gemma-2-9b-it-SimPO → quantized in this repository (pipihand01/gemma-2-9b-it-SimPO-GGUF).