AuriAetherwiing committed
Commit
8a6cb38
1 Parent(s): a5348c1

Update README.md

Files changed (1): README.md +73 -3
README.md CHANGED
@@ -1,3 +1,73 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - anthracite-org/kalo-opus-instruct-22k-no-refusal
+ - Nopm/Opus_WritingStruct
+ - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
+ - Gryphe/Sonnet3.5-Charcard-Roleplay
+ - Gryphe/ChatGPT-4o-Writing-Prompts
+ - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
+ - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
+ - nothingiisreal/Reddit-Dirty-And-WritingPrompts
+ - allura-org/Celeste-1.x-data-mixture
+ - allura-org/shortstories_synthlabels
+ base_model:
+ - Qwen/Qwen2.5-14B
+ ---
+ 
+ **EVA Qwen2.5 14B 0.1**
+ 
+ <p>
+ An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.<br>
+ It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.<br>
+ </p>
+ 
+ <p>
+ <b>Version 0.1 notes:</b><br> The dataset was deduplicated and cleaned up relative to version 0.0, and the training sequence length was increased. The resulting model seems to be more stable, and 0.0's problems with handling short inputs and min_p sampling appear to be gone.<br>
+ This version seems to be more or less optimal for the current data. Training (again) started crashing on each checkpoint after some point, but that was less of a problem this time, as eval/loss had already flatlined by then. This is the epoch 2.7 checkpoint.
+ </p>
+ 
+ <p>A note about quants: IMatrix quants are not recommended, as they seem to destabilize Qwen-based models for some reason. If you're using an IMatrix quant and seeing issues such as the model using asterisks for actions in chats where actions are written in plain text, try a static GGUF instead.</p>
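+ 
+ As a minimal sketch, loading a static quant with llama-cpp-python might look like this (the file name below is hypothetical; substitute any static, non-IMatrix file from the GGUF repo you use):
+ 
+ ```python
+ # Minimal sketch: loading a static (non-IMatrix) GGUF with llama-cpp-python.
+ # The file name is hypothetical; substitute a real static quant (e.g. Q5_K_M).
+ from llama_cpp import Llama
+ 
+ llm = Llama(
+     model_path="EVA-Qwen2.5-14B-v0.1-Q5_K_M.gguf",  # static quant, not an IQ*/imatrix file
+     n_ctx=8192,       # context window
+     n_gpu_layers=-1,  # offload all layers to GPU if available
+ )
+ 
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "Write a two-sentence scene set in a night market."}],
+     max_tokens=256,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```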
+ 
+ 
+ <p>Prompt format is ChatML.</p>
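+ 
+ For reference, a ChatML prompt is laid out as follows (a generic illustration of the format, not taken from the training config):
+ 
+ ```
+ <|im_start|>system
+ {system prompt}<|im_end|>
+ <|im_start|>user
+ {user message}<|im_end|>
+ <|im_start|>assistant
+ ```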
+ <h3>Recommended sampler values:</h3>
+ <ul>
+ <li>Temperature: 0.87</li>
+ <li>Top-P: 0.81</li>
+ <li>Min-P: 0.0025</li>
+ <li>Repetition Penalty: 1.03</li>
+ </ul>
+ <p>A temperature below 1 is still recommended, but this version seems to perform fine with higher temperature and Min-P. The earlier problems with Min-P sampling finally appear to be solved.</p>
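+ 
+ As a minimal sketch (the repo id below is a placeholder, and min_p support requires a recent transformers release), applying these values with transformers could look like:
+ 
+ ```python
+ # Minimal sketch: generation with the recommended sampler values via transformers.
+ # The repo id is a placeholder; min_p needs a recent transformers version.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ model_id = "your-org/EVA-Qwen2.5-14B-v0.1"  # placeholder repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ 
+ # Build a ChatML prompt via the tokenizer's chat template.
+ messages = [
+     {"role": "system", "content": "You are a creative storytelling assistant."},
+     {"role": "user", "content": "Open a story in a storm-battered lighthouse."},
+ ]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ 
+ output = model.generate(
+     inputs,
+     max_new_tokens=512,
+     do_sample=True,
+     temperature=0.87,
+     top_p=0.81,
+     min_p=0.0025,
+     repetition_penalty=1.03,
+ )
+ print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```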
+ 
+ <h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
+ 
+ - [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
+ - [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
+ 
+ <h3>Training data:</h3>
+ <ul>
+ <li>Celeste 70B 0.1 data mixture minus the Opus Instruct subset. See that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
+ <li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
+ <li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.</li>
+ <li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe.</li>
+ <li>A cleaned subset (~3k rows) of shortstories_synthlabels by Auri.</li>
+ <li>Synthstruct and SynthRP datasets by Epiculous.</li>
+ </ul>
+ <h3>Training time and hardware:</h3>
+ <ul><li>TBA</li></ul>
+ The model was trained by Kearm and Auri.
+ <h4>Special thanks:</h4>
+ <ul>
+ <li>to Gryphe, Lemmy, Kalomaze, Nopm, and Epiculous for the data;</li>
+ <li>to Alpindale for helping with the FFT config for Qwen2.5;</li>
+ <li>to InfermaticAI's community for their continued support of our endeavors;</li>
+ <li>and to Allura-org for support and feedback on EVA models.</li>
+ </ul>