---
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
license: llama3.1
datasets:
- tohur/natsumura-rp-identity-sharegpt
- tohur/ultrachat_uncensored_sharegpt
- Nopm/Opus_WritingStruct
- ResplendentAI/bluemoon
- tohur/Internal-Knowledge-Map-sharegpt
- felix-ha/tiny-stories
- tdh87/Stories
- tdh87/Just-stories
- tdh87/Just-stories-2
---
# natsumura-storytelling-rp-1.0-llama-3.1-8b-GGUF
This is the Storytelling/RP model in my Natsumura series of 8B models. It is finetuned on storytelling and roleplaying datasets, so it works well for character chatbots in frontends such as SillyTavern, Agnai, and RisuAI, and for fictional writing in general. It supports up to 128k context.
- **Developed by:** Tohur
- **License:** llama3.1
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B-Instruct
This model is based on meta-llama/Meta-Llama-3.1-8B-Instruct and is governed by the [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE).
Natsumura is uncensored: it will comply with nearly any request, including unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
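If you'd rather drive the model from Python, here is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The quant filename is an assumption; check this repository's file list for the exact names.
```
# Minimal sketch: download one quant and load it with llama-cpp-python.
# repo_id and filename are assumptions -- verify them against the repo's files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b-GGUF",
    filename="natsumura-storytelling-rp-1.0-llama-3.1-8b.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=8192)  # raise n_ctx toward 128k as memory allows
```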
## Provided Quants
(sorted from lowest to highest quality)

| Quant | Notes |
|:-----|:-----|
| Q2_K | |
| Q3_K_S | |
| Q3_K_M | lower quality |
| Q3_K_L | |
| Q4_0 | |
| Q4_K_S | fast, recommended |
| Q4_K_M | fast, recommended |
| Q5_0 | |
| Q5_K_S | |
| Q5_K_M | |
| Q6_K | very good quality |
| Q8_0 | fast, best quality |
| f16 | 16 bpw, overkill |
## Use in Ollama
```
ollama pull Tohur/natsumura-storytelling-rp-llama-3.1
```
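Once pulled, the model can be run interactively with `ollama run Tohur/natsumura-storytelling-rp-llama-3.1`.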
## Datasets used
- tohur/natsumura-rp-identity-sharegpt
- tohur/ultrachat_uncensored_sharegpt
- Nopm/Opus_WritingStruct
- ResplendentAI/bluemoon
- tohur/Internal-Knowledge-Map-sharegpt
- felix-ha/tiny-stories
- tdh87/Stories
- tdh87/Just-stories
- tdh87/Just-stories-2
The following parameters were used in [Llama Factory](https://github.com/hiyouga/LLaMA-Factory) during training:
- per_device_train_batch_size=2
- gradient_accumulation_steps=4
- lr_scheduler_type="cosine"
- logging_steps=10
- warmup_ratio=0.1
- save_steps=1000
- learning_rate=2e-5
- num_train_epochs=3.0
- max_samples=500
- max_grad_norm=1.0
- quantization_bit=4
- loraplus_lr_ratio=16.0
- fp16=True
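For reference, the parameters above map onto a [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory) YAML config roughly as sketched below. Only the keys listed above come from this card; the model, dataset, and output fields are illustrative assumptions.
```
### Sketch of a LLaMA Factory SFT config using the parameters above.
### model_name_or_path, dataset, and output_dir are illustrative assumptions.
model_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct
quantization_bit: 4          # QLoRA-style 4-bit loading
stage: sft
do_train: true
finetuning_type: lora        # implied by loraplus_lr_ratio
loraplus_lr_ratio: 16.0
dataset: your_dataset_here
output_dir: saves/natsumura-8b
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 2.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
max_samples: 500
max_grad_norm: 1.0
logging_steps: 10
save_steps: 1000
fp16: true
```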
## Inference
I use the following settings for inference:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"top_p": 0.95
"top_k": 40
"min_p": 0.05
```
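With llama-cpp-python (see the Usage sketch above), these settings map directly onto sampling arguments; note that llama-cpp-python names the repetition penalty `repeat_penalty`:
```
# Sketch: apply the inference settings above to the `llm` loaded earlier.
out = llm(
    prompt,                # a string built with the prompt template below
    max_tokens=256,
    temperature=1.0,
    repeat_penalty=1.05,   # repetition_penalty in the settings above
    top_p=0.95,
    top_k=40,
    min_p=0.05,
)
print(out["choices"][0]["text"])
```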
## Prompt template: llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
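For completeness, a prompt matching this template can be assembled by hand; `build_prompt` below is a hypothetical helper, not part of any library:
```
# Sketch: assemble a llama3-format prompt string matching the template above.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

print(build_prompt("You are Natsumura, a storyteller.", "Tell me a short story."))
```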