Commit 409cd67 by SanjiWatsuki
Parent: 5adde63

Update README.md

Files changed (1):
  1. README.md (+13, -13)

README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 <!-- description start -->
 ## Description
 
-This repository hosts **Kunoichi-7B**, an RP model which is also very suitable for general tasks. In both my testing and the benchmarks, Kunoichi is an extremely strong model, keeping the advantages of my previous RP models but gaining more intelligence. Kunoichi scores extremely well on [all benchmarks which correlate closely with ChatBot Arena Elo.](https://www.reddit.com/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/)
 
 | Model | MT Bench | EQ Bench | MMLU | Logic Test |
 |----------------------|----------|----------|---------|-------------|
@@ -27,19 +27,12 @@ This repository hosts **Kunoichi-7B**, an RP model which is also very suitable f
 | Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
 | Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
 
 <!-- description end -->
 <!-- prompt-template start -->
 ## Prompt template: Custom format, or Alpaca
 
-### Custom format:
-I found the best SillyTavern results from using the Noromaid template.
-
-SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). Additionally, here is my [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json)
-
-Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by adjusting temperature up or dropping min p to boost creativity or raise min p to increase stability. You shouldn't need to touch anything else!
-
-The model is intended to be used with up to an 8k context window. Using a NTK RoPE alpha of 2.6, the model can be used experimentally up to a 16k context window.
-
 ### Alpaca:
 ```
 Below is an instruction that describes a task. Write a response that appropriately completes the request.
@@ -48,11 +41,18 @@ Below is an instruction that describes a task. Write a response that appropriate
 {prompt}
 
 ### Response:
-
 ```
 
 ## WTF is Kunoichi-7B?
 
-Kunoichi-7B is a SLERP merger between my previous RP model, Silicon-Maid-7B, and an unreleased RP model that I had dubbed "Ninja-Maid-7B". This model is the result of me attempting to merge an RP focused model which maintained the strengths of Silicon-Maid-7B but further increased the model's brain power. I sought to increase both MT-Bench and EQ-Bench without losing Silicon Maid's strong ability to follow SillyTavern character cards.
 
-Ninja-Maid-7B was born from an attempt to turn [jan-hq/stealth-v1.2](https://huggingface.co/jan-hq/stealth-v1.2) into a viable RP model. Although none of the Ninja Maid prototype models developed to a point where I was happy, it turned out to be a strong model to merge. Combined with Silicon-Maid-7B, this appeared to be a strong merger.
 
 <!-- description start -->
 ## Description
 
+This repository hosts **Kunoichi-7B**, a general-purpose model capable of RP. In both my testing and the benchmarks, Kunoichi is an extremely strong model, keeping the advantages of my previous RP models but gaining more intelligence. Kunoichi scores extremely well on [all benchmarks which correlate closely with ChatBot Arena Elo.](https://www.reddit.com/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/)
 
 | Model | MT Bench | EQ Bench | MMLU | Logic Test |
 |----------------------|----------|----------|---------|-------------|
 | Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
 | Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
 
+The model is intended to be used with up to an 8k context window. Using an NTK RoPE alpha of 2.6, the model can be used experimentally up to a 16k context window.
+
 <!-- description end -->
 <!-- prompt-template start -->
 ## Prompt template: Custom format, or Alpaca
 
 ### Alpaca:
 ```
 Below is an instruction that describes a task. Write a response that appropriately completes the request.
 
 {prompt}
 
 ### Response:
 ```
 
+### SillyTavern format:
+I found the best SillyTavern results from using the Noromaid template.
+
+SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
+
+Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak it by raising the temperature or lowering min p to boost creativity, or by raising min p to increase stability. You shouldn't need to touch anything else!
+
 ## WTF is Kunoichi-7B?
 
+Kunoichi-7B is a SLERP merger between my previous RP model, Silicon-Maid-7B, and an unreleased model that I had dubbed "Ninja-7B". This model is the result of me attempting to merge an RP-focused model which maintained the strengths of Silicon-Maid-7B but further increased the model's brain power. I sought to increase both MT-Bench and EQ-Bench without losing Silicon Maid's strong ability to follow SillyTavern character cards.
 
+Ninja-7B was born from an attempt to turn [jan-hq/stealth-v1.2](https://huggingface.co/jan-hq/stealth-v1.2) into a viable model through mergers. Although none of the Ninja prototype models developed to a point where I was happy, it turned out to be a strong model to merge. Combined with Silicon-Maid-7B, this appeared to be a strong merger.
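The Alpaca block in the template section can be filled in programmatically. A minimal sketch, assuming the standard Alpaca layout with an `### Instruction:` header (the `build_alpaca_prompt` helper is illustrative, not part of the model card):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Format a single-turn request in the Alpaca layout from the model card."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Summarize the model card in one sentence."))
```

Generation should stop at the model's EOS token; the text after `### Response:` is the model's reply.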
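The "NTK RoPE alpha of 2.6" in the updated description corresponds, in most NTK-aware implementations, to stretching the RoPE frequency base. A sketch of one common convention, assuming the scaling theta' = theta * alpha^(d/(d-2)) with a base theta of 10,000 and a head dimension of 128 (exact formulas vary between backends, so treat the number as approximate):

```python
def ntk_rope_theta(alpha: float, base_theta: float = 10_000.0, head_dim: int = 128) -> float:
    # NTK-aware scaling: raise the RoPE frequency base so that low-frequency
    # dimensions are interpolated while high-frequency ones stay nearly intact.
    return base_theta * alpha ** (head_dim / (head_dim - 2))

theta = ntk_rope_theta(2.6)
print(round(theta))  # roughly 26,400 under these assumptions
```

In llama.cpp-style backends this value would be passed as the RoPE frequency base instead of alpha itself.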
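SLERP merging, as used to produce Kunoichi-7B, interpolates along the great circle between two weight vectors rather than along the straight line between them. A toy sketch of the interpolation itself (real merges, e.g. with mergekit, apply this per tensor; this `slerp` helper is illustrative):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two weight vectors.

    Falls back to linear interpolation when the vectors are nearly parallel.
    """
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding error
    omega = math.acos(dot)
    if omega < 1e-6:  # nearly parallel: lerp is numerically safer
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(omega)
    c0 = math.sin((1 - t) * omega) / s
    c1 = math.sin(t * omega) / s
    return [c0 * a + c1 * b for a, b in zip(v0, v1)]

# Midpoint of two orthogonal unit vectors stays on the unit circle: ~[0.7071, 0.7071]
print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))
```

At t = 0 the first model's weights are returned unchanged, at t = 1 the second's; intermediate t values preserve vector norm better than plain averaging, which is the usual argument for SLERP in weight merging.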