---
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
license: llama3.1
datasets:
- tohur/natsumura-rp-identity-sharegpt
- tohur/ultrachat_uncensored_sharegpt
- Nopm/Opus_WritingStruct
- ResplendentAI/bluemoon
- tohur/Internal-Knowledge-Map-sharegpt
- felix-ha/tiny-stories
- tdh87/Stories
- tdh87/Just-stories
- tdh87/Just-stories-2
---
# natsumura-storytelling-rp-1.0-llama-3.1-8b
This is the storytelling/RP model in my Natsumura series of 8B models. It is finetuned on storytelling and roleplaying datasets, which makes it a great fit for character chatbots in applications such as SillyTavern, Agnai, and RisuAI, as well as for fictional writing. It supports up to 128k of context.

- **Developed by:** Tohur
- **License:** llama3.1
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B-Instruct

This model is based on meta-llama/Meta-Llama-3.1-8B-Instruct and is governed by the [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE).
Natsumura is uncensored and highly compliant: it will go along with any request, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/uqJv-R1LeJEfMxi1nmTH5.png)

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co./TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
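
As a rough sketch, one of the quants below can also be loaded programmatically with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The filename and settings here are assumptions; substitute the quant file you actually downloaded:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="natsumura-storytelling-rp-1.0-llama-3.1-8b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # context window; raise it (up to 128k) if you have the memory
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

out = llm("Write the opening paragraph of a short fantasy story.", max_tokens=256)
print(out["choices"][0]["text"])
```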

## Provided Quants

(sorted from lowest to highest quality.)

| Quant | Notes |
|:------|:------|
| Q2_K |  |
| Q3_K_S |  |
| Q3_K_M | lower quality |
| Q3_K_L |  |
| Q4_0 |  |
| Q4_K_S | fast, recommended |
| Q4_K_M | fast, recommended |
| Q5_0 |  |
| Q5_K_S |  |
| Q5_K_M |  |
| Q6_K | very good quality |
| Q8_0 | fast, best quality |
| f16 | 16 bpw, overkill |

## Use in Ollama
```
ollama pull Tohur/natsumura-storytelling-rp-llama-3.1
```
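
After pulling, the model can also be called from Python through the [ollama](https://github.com/ollama/ollama-python) client library. This is only a sketch, assuming the pull above succeeded:

```python
# Minimal sketch using the ollama Python client (pip install ollama).
import ollama

response = ollama.chat(
    model="Tohur/natsumura-storytelling-rp-llama-3.1",
    messages=[
        {"role": "system", "content": "You are Natsumura, a helpful AI assistant."},
        {"role": "user", "content": "Good day, Natsumura."},
    ],
)
print(response["message"]["content"])
```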

## Datasets used
- tohur/natsumura-rp-identity-sharegpt
- tohur/ultrachat_uncensored_sharegpt
- Nopm/Opus_WritingStruct
- ResplendentAI/bluemoon
- tohur/Internal-Knowledge-Map-sharegpt
- felix-ha/tiny-stories
- tdh87/Stories
- tdh87/Just-stories
- tdh87/Just-stories-2

## Inference

I use the following settings for inference:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"top_p": 0.95
"top_k": 40
"min_p": 0.05
```
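
As a rough mapping, the same settings correspond to llama-cpp-python's sampling parameters (parameter names follow llama.cpp, e.g. `repeat_penalty` for repetition penalty; `llm` is a `Llama` instance loaded from one of the GGUF quants above):

```python
# Sketch: applying the settings above via llama-cpp-python's chat API.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Natsumura, a helpful AI assistant."},
        {"role": "user", "content": "Tell me a short story about a lighthouse keeper."},
    ],
    temperature=1.0,
    repeat_penalty=1.05,  # llama.cpp's name for repetition_penalty
    top_p=0.95,
    top_k=40,
    min_p=0.05,
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```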

## Prompt template: ChatML

ChatML is the way to go!
```
<|im_start|>system
You are Natsumura, a helpful AI assistant.<|im_end|>
<|im_start|>user
Tohur: Good day, Natsumura.<|im_end|>
<|im_start|>assistant
Natsumura:
```
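
For backends that expect raw text completion rather than a chat API, the template can be assembled by hand. The helper below is purely illustrative (not part of any library):

```python
# Sketch: building the ChatML prompt string shown above by hand.
def chatml_prompt(system: str, user: str, char: str = "Natsumura") -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{char}:"
    )

prompt = chatml_prompt(
    "You are Natsumura, a helpful AI assistant.",
    "Tohur: Good day, Natsumura.",
)
print(prompt)
```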