---
tags:
  - roleplay
  - llama3
  - sillytavern
language:
  - en
---

GGUF-IQ-Imatrix quants for ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B.

Compatible SillyTavern presets are available here (simple) or here (Virt's).
Use the latest version of KoboldCpp together with the provided presets.
This is all still highly experimental; let the authors know how it performs for you. Feedback is more important than ever now.

For 8 GB VRAM GPUs, I recommend the Q4_K_M-imat quant for context sizes of up to 12288.
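
If you would rather load the quant programmatically than through KoboldCpp, here is a minimal llama-cpp-python sketch; the local GGUF filename and GPU offload settings are assumptions you will need to adjust for your own setup:

```python
# Minimal sketch, assuming the llama-cpp-python package and a local copy of the
# Q4_K_M-imat GGUF file (filename below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="Poppy_Porpoise-v0.7-L3-8B-Q4_K_M-imat.gguf",  # placeholder local path
    n_ctx=12288,      # context size recommended above for 8 GB VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU if they fit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```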

Original model information:

Average Normie v1


A model by an average normie for the average normie.

This model is a stock merge of the following models:

https://huggingface.co./cgato/L3-TheSpice-8b-v0.1.3

https://huggingface.co./Sao10K/L3-Solana-8B-v1

https://huggingface.co./ResplendentAI/Kei_Llama3_8B
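
For illustration only, a naive uniform parameter average over the three models listed above might look like the sketch below. The actual release used a stock merge (for example, mergekit's model_stock method), so treat this as a rough approximation of the idea rather than the exact recipe; the output directory is a placeholder:

```python
# Illustrative sketch: element-wise mean of three Llama-3 8B checkpoints.
import torch
from transformers import AutoModelForCausalLM

model_ids = [
    "cgato/L3-TheSpice-8b-v0.1.3",
    "Sao10K/L3-Solana-8B-v1",
    "ResplendentAI/Kei_Llama3_8B",
]

# Load all three source models (this needs enough RAM for three 8B checkpoints).
models = [AutoModelForCausalLM.from_pretrained(m, torch_dtype=torch.bfloat16) for m in model_ids]
merged = models[0]

# Overwrite each parameter of the first model with the mean of all three.
with torch.no_grad():
    state_dicts = [m.state_dict() for m in models]
    for name, tensor in merged.state_dict().items():
        mean = sum(sd[name].float() for sd in state_dicts) / len(state_dicts)
        tensor.copy_(mean)

merged.save_pretrained("merged-base")  # placeholder output directory
```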

The final merge then had the following LoRA applied over it:

https://huggingface.co./ResplendentAI/Theory_of_Mind_Llama3
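
As a rough sketch of that last step, applying the LoRA and baking it into the merged base with transformers and peft could look like the following; the local paths are placeholders and the authors' actual tooling may have differed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the stock-merged base (placeholder path from the merge sketch above).
base = AutoModelForCausalLM.from_pretrained("merged-base", torch_dtype=torch.bfloat16)

# Apply the Theory_of_Mind_Llama3 LoRA and bake its weights into the base model.
model = PeftModel.from_pretrained(base, "ResplendentAI/Theory_of_Mind_Llama3")
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("ResplendentAI/Kei_Llama3_8B")
model.save_pretrained("final-model")       # placeholder output directory
tokenizer.save_pretrained("final-model")
```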

This should be an intelligent and adept roleplaying model.