Commit 4ec31e9 (parent 49a87ad) by MarinaraSpaghetti: Update README.md

Files changed (1): README.md (+98, -3)

---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/zYBXSewLbIxWHZdB3oEHs.jpeg)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/eRwPcd9Ox03hn_WRnsotj.png)

# Information
## Details

Okay, I tried really hard to improve my ChatML merges, but that went terribly wrong. Everyone is adding special tokens with different IDs, so I can't even make a proper union tokenizer for them, damn. Not to mention, I made some... interesting discoveries regarding some models' context lengths. You can watch the breakdown of how it all went down here: https://www.captiongenerator.com/v/2303039/marinaraspaghetti's-merging-experience.

This one feels a bit different from my previous attempts and seems less prone to repetition, especially at higher contexts, which is great for me! I'll probably improve on it even further, but for now, it feels rather nice. Great for RP and storytelling. All credits and thanks go to the amazing MistralAI, Intervitens, Sao10K and Nbeerbower for their amazing models! Plus, special shoutouts to Parasitic Rogue for ideas and Prodeus Unity for cool exl2 quants of my previous merges. Have a good one, everyone.

## Instruct

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/JtOSIRNnMdGNycWACobO2.gif)

*Sigh,* Mistral Instruct, I'm afraid.

```
<s>[INST] {system} [/INST]{response}</s>[INST] {user's message} [/INST]{response}</s>
```
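
For clarity, here's a minimal Python sketch of how this template can be assembled by hand; the `build_prompt` helper and the example messages are illustrative, not something shipped with the model.

```python
# Hypothetical helper mirroring the Mistral Instruct template above.
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is an ordered list of ("user" | "assistant", text) messages."""
    prompt = f"<s>[INST] {system} [/INST]"  # the system prompt fills the first [INST] block
    for role, text in turns:
        if role == "user":
            prompt += f"[INST] {text} [/INST]"
        else:
            prompt += f"{text}</s>"  # each model response is closed with </s>
    return prompt

# Example: one model greeting, then a user message awaiting a response.
print(build_prompt("You are a helpful storyteller.", [
    ("assistant", "Hello! What shall we write today?"),
    ("user", "A tale of two spaghetti."),
]))
```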

## Parameters

I recommend running Temperature 1.0-1.2 with 0.1 Top A or 0.01-0.1 Min P, plus DRY at 0.8/1.75/2/0 (multiplier/base/allowed length/penalty range). It also works with Temperatures below 1.0. Nothing more is needed.
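
If you generate with plain `transformers` (which supports Temperature and Min P, though not Top A or DRY; those live in frontends like SillyTavern), a sketch of these settings could look like the following. The repo id is assumed from the model name, and `min_p` requires a recent `transformers` release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name below.
model_id = "MarinaraSpaghetti/NemoMix-Unleashed-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="bfloat16")

# The tokenizer prepends <s> itself, so the prompt starts at [INST].
inputs = tokenizer("[INST] You are a helpful storyteller. [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.1,  # recommended range: 1.0-1.2
    min_p=0.05,       # recommended range: 0.01-0.1
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```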

### Settings

You can use my exact settings from here (use the ones from the Mistral Base/Customized folder): https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main.

## GGUF

https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B-GGUF
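
A minimal llama-cpp-python sketch for the GGUF quants might look like this; the quant filename glob is an assumption, so check the repo for the files it actually contains.

```python
from llama_cpp import Llama

# Filename glob is an assumption; pick an actual quant from the GGUF repo.
llm = Llama.from_pretrained(
    repo_id="MarinaraSpaghetti/NemoMix-Unleashed-12B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=16384,
)

out = llm(
    "[INST] You are a helpful storyteller. [/INST]",
    temperature=1.1,
    min_p=0.05,
    max_tokens=256,
)
print(out["choices"][0]["text"])
```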

# NemoMix-Unleashed-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della_linear merge method, with E:\mergekit\mistralaiMistral-Nemo-Base-2407 as the base.

### Models Merged

The following models were included in the merge:
* E:\mergekit\intervitens_mini-magnum-12b-v1.1
* E:\mergekit\nbeerbower_mistral-nemo-bophades-12B
* E:\mergekit\Sao10K_MN-12B-Lyra-v1
* E:\mergekit\nbeerbower_mistral-nemo-gutenberg-12B
* E:\mergekit\mistralaiMistral-Nemo-Instruct-2407
60
+ ### Configuration
61
+
62
+ The following YAML configuration was used to produce this model:
63
+
64
+ ```yaml
65
+ models:
66
+ - model: E:\mergekit\mistralaiMistral-Nemo-Instruct-2407
67
+ parameters:
68
+ weight: 0.1
69
+ density: 0.4
70
+ - model: E:\mergekit\nbeerbower_mistral-nemo-bophades-12B
71
+ parameters:
72
+ weight: 0.12
73
+ density: 0.5
74
+ - model: E:\mergekit\nbeerbower_mistral-nemo-gutenberg-12B
75
+ parameters:
76
+ weight: 0.2
77
+ density: 0.6
78
+ - model: E:\mergekit\Sao10K_MN-12B-Lyra-v1
79
+ parameters:
80
+ weight: 0.25
81
+ density: 0.7
82
+ - model: E:\mergekit\intervitens_mini-magnum-12b-v1.1
83
+ parameters:
84
+ weight: 0.33
85
+ density: 0.8
86
+ merge_method: della_linear
87
+ base_model: E:\mergekit\mistralaiMistral-Nemo-Base-2407
88
+ parameters:
89
+ epsilon: 0.05
90
+ lambda: 1
91
+ dtype: bfloat16
92
+ tokenizer_source: base
93
+ ```
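
To reproduce the merge, the config can be run with mergekit's `mergekit-yaml` CLI or from Python; below is a rough sketch of the Python route, assuming the `run_merge` API from mergekit's documentation, with the local model paths adjusted to your machine.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above, saved locally as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./NemoMix-Unleashed-12B",  # output directory for the merged weights
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```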

# Ko-fi
## Enjoying what I do? Consider donating here, thank you!

https://ko-fi.com/spicy_marinara