aashish1904 committed on
Commit 268b378
1 Parent(s): b060bca

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +66 -0
README.md ADDED
---
license: apache-2.0
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Crimson_Dawn-v0.2-GGUF
This is a quantized version of [Epiculous/Crimson_Dawn-v0.2](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2), created using llama.cpp.
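
For a quick start, here is a minimal sketch of loading one of these GGUF quants with llama-cpp-python; the quant filename, context size, and generation settings below are illustrative assumptions, not part of the original card:

```python
# Minimal sketch, assuming llama-cpp-python (with huggingface_hub) is installed.
# The filename below is a hypothetical quant choice -- use whichever GGUF file
# from this repo you actually want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Crimson_Dawn-v0.2-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant; any GGUF in the repo works
    n_ctx=8192,               # context length, adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hi there!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```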

# Original Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/AEWJsybnM6wILRxgwlmrU.png)

Building on what seemed to work out well with Crimson_Dawn-v0.1, the new Crimson_Dawn-v0.2 uses the same training methodology, again training on [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). This time I've added significantly more data and trained using RSLoRA as opposed to regular LoRA. Another key change is training on ChatML as opposed to Mistral formatting.

# Quants!
<strong>full</strong> / [exl2](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2-exl2) / [gguf](https://huggingface.co/Epiculous/Crimson_Dawn-v0.2-GGUF)

## Prompting
The v0.2 models are trained on ChatML; the prompting structure goes a little something like this:

```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
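
If you are driving the model through a raw text-completion endpoint rather than a chat API, a tiny helper like the following (illustrative only, not from the original card) reproduces the ChatML layout shown above:

```python
# Illustrative helper (not from the original card): build a raw ChatML prompt
# string matching the template above, leaving the final assistant turn open
# for the model to complete.
def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    parts.append("<|im_start|>assistant")  # generation starts here
    return "\n".join(parts) + "\n"

prompt = to_chatml([
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
])
print(prompt)
```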

### Context and Instruct
The v0.2 models are trained on ChatML; please use the ChatML Context and Instruct templates.

### Current Top Sampler Settings
[Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>

## Training
Training was done twice, for 2 epochs each, on 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) using LoRA. A two-phased approach was used: the base model was first trained for 2 epochs on RP data, and the resulting LoRA was then applied to the base. The modified base was then trained for 2 epochs on instruct data, and the new instruct LoRA was applied to the modified base, resulting in what you see here.
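
In peft terms, the "apply the LoRA to the base" step between the two phases looks roughly like the sketch below; the adapter paths are hypothetical placeholders, and the actual training runs were done with Axolotl:

```python
# Rough sketch of the two-phase LoRA merge described above, using
# transformers + peft; the adapter paths are hypothetical placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Nemo-Base-2407")

# Phase 1: merge the RP LoRA into the base so phase 2 trains on top of it.
rp_model = PeftModel.from_pretrained(base, "path/to/rp-lora")
rp_merged = rp_model.merge_and_unload()

# Phase 2: after training the instruct LoRA against rp_merged, merge it too.
instruct_model = PeftModel.from_pretrained(rp_merged, "path/to/instruct-lora")
final = instruct_model.merge_and_unload()
final.save_pretrained("Crimson_Dawn-v0.2")
```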

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)