---
tags:
- roleplay
- llama3
- sillytavern
language:
- en
---
GGUF-IQ-Imatrix quants for [jeiku/Average_Normie_l3_v1_8B](https://huggingface.co./jeiku/Average_Normie_l3_v1_8B).

> [!WARNING]
> Compatible SillyTavern presets [here (simple)](https://huggingface.co./Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here (Virt's)](https://huggingface.co./Virt-io/SillyTavern-Presets).
> Use the latest version of KoboldCpp. **Use the provided presets.**
> This is all still highly experimental. Let the authors know how it performs for you; feedback is more important than ever now.

> [!NOTE]
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes of up to 12288 (see the loading sketch at the end of this card).

**Original model information:**

# Average Normie v1

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/dvNIj1rSTjBvgs3XJfqXK.png)

A model by an average normie, for the average normie.

This model is a stock merge of the following models:

https://huggingface.co./cgato/L3-TheSpice-8b-v0.1.3

https://huggingface.co./Sao10K/L3-Solana-8B-v1

https://huggingface.co./ResplendentAI/Kei_Llama3_8B

The final merge then had the following LoRA applied over it:

https://huggingface.co./ResplendentAI/Theory_of_Mind_Llama3

This should be an intelligent and adept roleplaying model.
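---

The card above recommends running the quants in the latest KoboldCpp with the linked presets. Purely as an illustration of the VRAM/context note, below is a minimal sketch that loads the same GGUF with llama-cpp-python instead. The quant filename and GPU layer count are assumptions (check the repository's file list for the exact name), and the chat call assumes the GGUF carries Llama 3 chat-template metadata.

```python
# Minimal sketch, not the recommended setup: the card itself points to
# KoboldCpp + the linked SillyTavern presets. This only illustrates the
# "Q4_K_M-imat at up to 12288 context on 8GB VRAM" note.
from llama_cpp import Llama

llm = Llama(
    model_path="Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf",  # assumed filename
    n_ctx=12288,      # context size suggested for 8GB VRAM GPUs in the note above
    n_gpu_layers=-1,  # offload all layers; lower this if you run out of VRAM
)

# Uses the chat template stored in the GGUF metadata; in SillyTavern/KoboldCpp
# the linked presets handle the prompt formatting for you.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful roleplaying assistant."},
        {"role": "user", "content": "Introduce yourself in character."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```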