---
license: apache-2.0
datasets:
- AuriAetherwiing/Allura
- kalomaze/Opus_Instruct_25k
base_model:
- AuriAetherwiing/Yi-1.5-9B-32K-tokfix
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/EVA-Yi-1.5-9B-32K-V1-GGUF
This is quantized version of [EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1](https://huggingface.co./EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1) created using llama.cpp
# Original Model Card
**EVA Yi 1.5 9B v1**
An RP/storywriting focused model, a full-parameter finetune of Yi-1.5-9B-32K on a mixture of synthetic and natural data.
A continuation of nothingiisreal's Celeste 1.x series, made to improve stability and versatility without losing the unique, diverse writing style of Celeste.
Quants: GGUF is not recommended, as llama.cpp breaks the tokenizer fix.
We recommend using the original BFloat16 weights; quantization seems to affect Yi significantly more than other model architectures.
Prompt format is ChatML.
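As a minimal sketch of the ChatML layout (using the standard ChatML special tokens; verify them against this model's tokenizer config before relying on this), a prompt can be assembled like so:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-formatted prompt string.

    Uses the conventional ChatML control tokens <|im_start|> / <|im_end|>;
    the exact tokens should be confirmed in the model's tokenizer_config.json.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a storyteller.", "Begin the tale."))
```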
Recommended sampler values:
- Temperature: 1
- Min-P: 0.05
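For reference, Min-P keeps only tokens whose probability is at least `min_p` times the probability of the most likely token, then renormalizes. Below is a minimal NumPy sketch of that filter (an illustration of the technique, not the sampler code of any particular backend):

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.05,
                 temperature: float = 1.0) -> np.ndarray:
    """Return renormalized probabilities after Min-P filtering.

    Tokens whose probability falls below min_p * max_probability
    are zeroed out; the remainder is renormalized to sum to 1.
    """
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))  # numerically stable softmax
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()

# With a high min_p, only tokens close to the top token survive.
logits = np.array([4.0, 3.5, 1.0, -2.0])
print(min_p_filter(logits, min_p=0.05))
```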
Recommended SillyTavern presets (via CalamitousFelicitousness):
- [Context](https://huggingface.co./EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co./EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
Training data:
- Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's card for details.
- Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
Hardware used:
Model was trained by Kearm and Auri.
Special thanks:
- to Lemmy, Gryphe, Kalomaze and Nopm for the data
- to ALK, Fizz and CalamitousFelicitousness for Yi tokenizer fix
- and to InfermaticAI's community for their continued support of our endeavors