---
base_model:
- akjindal53244/Llama-3.1-Storm-8B
- Sao10K/L3.1-8B-Niitama-v1.1
- v000000/L3.1-Niitorm-8B-t0.0001
library_name: transformers
tags:
- merge
- llama
- dpo
datasets:
- jondurbin/gutenberg-dpo-v0.1
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/L3.1-Niitorm-8B-DPO-t0.0001-GGUF

This is a quantized version of [v000000/L3.1-Niitorm-8B-DPO-t0.0001](https://huggingface.co/v000000/L3.1-Niitorm-8B-DPO-t0.0001) created using llama.cpp.

# Original Model Card

# Llama-3.1-Niitorm-8B-DPO

* *DPO Trained, Llama3.1-8B.*

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/QeNjtwolNpxUmpo9NL7VI.png)

<b>New: DPO'd Gutenberg version (full-epoch training).</b>

An RP model that uses Niitama 1.1 as its base, nearswapped with "Storm", one of the smartest Llama 3.1 models, then DPO-trained; mostly abliterated.

Essentially, it's an improved Niitama 1.1.

-------------------------------------------------------------------------------

*Gutenberg DPO produces more human-like prose/story writing and greatly lessens the synthetic feel of outputs.*

-------------------------------------------------------------------------------

# *llama.cpp:*

# Thank you, mradermacher (GGUF)

* [GGUF static](https://huggingface.co/mradermacher/L3.1-Niitorm-8B-DPO-t0.0001-GGUF)

* [GGUF Imatrix](https://huggingface.co/mradermacher/L3.1-Niitorm-8B-DPO-t0.0001-i1-GGUF)

# v0 (GGUF)

* [GGUF Imatrix](https://huggingface.co/v000000/L3.1-Niitorm-8B-DPO-t0.0001-GGUFs-IMATRIX) *(only Q8, Q6_K, Q5_K_S, Q4_K_S, IQ4_XS)*

## Finetune and merge

This is a merge and finetune of pre-trained language models.

The *resultant merge was finetuned* on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) for 1 epoch at a 1.5e-5 learning rate on an Nvidia A100.

## Merge Details

### Merge Method

This model was merged using the <b>NEARSWAP t0.0001</b> merge algorithm.

### Models Merged

The following models were included in the merge:
* Base model: [Sao10K/L3.1-8B-Niitama-v1.1](https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
        layer_range: [0, 32]
      - model: akjindal53244/Llama-3.1-Storm-8B
        layer_range: [0, 32]
merge_method: nearswap
base_model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
parameters:
  t:
    - value: 0.0001
dtype: float16

# Then, DPO finetune on
# https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
```
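
For intuition, nearswap can be sketched as a similarity-gated interpolation: where a base weight and the secondary model's weight differ by no more than `t`, the result takes the secondary model's value; as the difference grows, the result stays close to the base. The NumPy sketch below is an illustration of that idea only, not mergekit's actual implementation; the function name and the exact clipping behavior are assumptions.

```python
import numpy as np

def nearswap(v0: np.ndarray, v1: np.ndarray, t: float) -> np.ndarray:
    """Similarity-gated interpolation from base v0 toward secondary v1.

    Parameters that are nearly identical across the two models
    (|v0 - v1| <= t) are taken from the secondary model; dissimilar
    parameters stay close to the base.
    """
    diff = np.abs(v0 - v1)
    # Interpolation factor: 1.0 where the tensors agree within t,
    # shrinking toward 0 as they diverge. Identical entries get 1.0.
    with np.errstate(divide="ignore"):
        lweight = np.where(diff == 0.0, 1.0, t / diff)
    lweight = np.clip(lweight, 0.0, 1.0)
    return v0 + lweight * (v1 - v0)  # elementwise lerp toward v1

# With t = 0.0001, only near-identical parameters blend noticeably:
base = np.array([1.0, 1.00000, 1.0])
other = np.array([1.0, 1.00005, 2.0])
merged = nearswap(base, other, t=0.0001)
```

With `t` this small, the merge keeps the base model almost everywhere and only "swaps in" the secondary model where the two networks already agree, which is why the result behaves like a refined Niitama rather than an average of the two models.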

### DPO Notes

*I used a higher learning rate and the full dataset compared to my "L3.1-Celestial-Stone-2x8B-DPO" run. This produced lower loss and better adaptation to the chosen style.*
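
For reference, DPO optimizes a pairwise objective over (chosen, rejected) completions: the loss is `-log σ(β[(log π(y_w|x) − log π_ref(y_w|x)) − (log π(y_l|x) − log π_ref(y_l|x))])`, where `π_ref` is the frozen pre-DPO model. A minimal sketch of that loss for a single preference pair; the `beta` value here is illustrative, not necessarily the one used for this model:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen, rejected) preference pair.

    Rewards are implicit log-probability ratios against the frozen
    reference model; the loss pushes the policy to widen the margin
    between the chosen and rejected completions.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): log(2) at margin 0, falling as margin grows.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that already prefers the chosen completion more than the
# reference does has a positive margin and a loss below log(2):
loss = dpo_loss(-10.0, -20.0, -12.0, -18.0)
```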

-------------------------------------------------------------------------------

# Prompt Template:
```bash
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
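
This is the standard Llama 3 Instruct format. A small helper that assembles a single-turn prompt in this format, leaving the assistant header open so the model generates the completion (`build_prompt` is a hypothetical convenience function, not part of any library):

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt. The assistant
    header is left open; the model is expected to emit the reply
    followed by <|eot_id|>."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful writer.", "Tell me a short story.")
```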

Credit to Alchemonaut.

Credit to Sao10K.

Credit to Grimjim.

Credit to mlabonne.

Credit to jondurbin.

Credit to woofwolfy.