End of training

Files changed:
- README.md: +263 -0
- adapter_model.bin: +3 -0

README.md (ADDED)
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0
model-index:
- name: T3Q-LLM-sft1.0-dpo1.0_4300QA
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0
base_model_config: T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_llama_derived_model: true
hub_model_id: T3Q-LLM-sft1.0-dpo1.0_4300QA

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  # - path: admin_data.csv
  - path: superiort/multiplechoice-4300
    type: alpaca
    # The fields below are the defaults; only set them if your dataset uses
    # different column names.
    # system_prompt: ""
    # system_format: "{system}"
    # field_system: system
    # field_instruction: instruction
    # field_input: input
    # field_output: output

    # format: |-
    #   Human: {instruction} {input}
    #   Assistant:

    # no_input_format: "{instruction} "

# dataset_prepared_path: yanolja_preprocessed_data
dataset_prepared_path: last_run_prepared
val_set_size: 0.2
output_dir: ./T3Q-LLM-sft1.0-dpo1.0_4300QA

adapter: qlora
lora_model_dir:

# device_map: [0,1,3]

sequence_len: 4096
sample_packing: false

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: axolotl_T3Q_4300
wandb_entity:
wandb_watch:
wandb_run_id: T3Q_mod_4300
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "<|im_end|>"
  unk_token: "<unk>"
  pad_token: "</s>"  # EOS and PAD are the same token

```

</details><br>
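
To reproduce the run, the YAML above can presumably be saved to a file and launched with axolotl 0.4.0's CLI (e.g. `accelerate launch -m axolotl.cli.train config.yaml`); the exact invocation is not recorded in this card.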

# T3Q-LLM-sft1.0-dpo1.0_4300QA

This model is a fine-tuned version of [T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0](https://huggingface.co/T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0) on the [superiort/multiplechoice-4300](https://huggingface.co/datasets/superiort/multiplechoice-4300) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2288

Note that 1.2288 is the loss of the final (epoch 10) checkpoint. Validation loss bottoms out at 0.6208 around epochs 1.5-1.7 and climbs steadily afterwards (see the training results below), so an earlier checkpoint is likely preferable for downstream use.

## Model description

A QLoRA adapter (4-bit quantized base, `lora_r: 32`, `lora_alpha: 16`, dropout 0.05, all linear layers targeted) trained on top of T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0 at a sequence length of 4096. This repository contains only the adapter weights (`adapter_model.bin`); the base model must be loaded separately at inference time.

## Intended uses & limitations

The adapter was tuned on a multiple-choice QA dataset of roughly 4,300 items (per the dataset name), so its primary intended use is multiple-choice question answering in the style of the training data. Behaviour outside that setting is untested, and the rising validation loss noted above means the final checkpoint may generalize poorly.
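
As a starting point, the sketch below shows one plausible way to load the base model in 4-bit and attach this adapter with PEFT. It is an illustration, not part of the original card: the adapter repo id is an assumption (derived from `hub_model_id` and the dataset owner), and the prompt is a generic alpaca-style template matching `type: alpaca` in the config.

```python
# Hypothetical usage sketch; the adapter repo id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0"
adapter_id = "superiort/T3Q-LLM-sft1.0-dpo1.0_4300QA"  # assumed hub location

# 4-bit quantization, mirroring load_in_4bit / bf16 in the training config.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Generic alpaca-style prompt (the dataset type in the config is `alpaca`).
prompt = (
    "### Instruction:\nAnswer the following multiple-choice question.\n\n"
    "### Input:\n<question and choices here>\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For deployment without PEFT, the adapter can also be merged into a full-precision (non-quantized) copy of the base model via `PeftModel.merge_and_unload()`.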

## Training and evaluation data

Training used [superiort/multiplechoice-4300](https://huggingface.co/datasets/superiort/multiplechoice-4300) in alpaca format (default `instruction`/`input`/`output` columns), with 20% of the data held out for validation (`val_set_size: 0.2`).
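
A quick way to inspect the data, assuming the dataset is publicly available on the Hub under the path given in the config:

```python
# Hypothetical sketch: peek at the first training example.
from datasets import load_dataset

ds = load_dataset("superiort/multiplechoice-4300", split="train")
print(ds[0])  # expected alpaca-style columns: instruction / input / output
```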

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
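
The totals follow from the per-device settings: total_train_batch_size = train_batch_size × gradient_accumulation_steps × num_devices = 2 × 4 × 4 = 32, and total_eval_batch_size = eval_batch_size × num_devices = 2 × 4 = 8.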

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2424 | 0.0093 | 1 | 1.0432 |
| 1.0333 | 0.1023 | 11 | 0.9004 |
| 0.8715 | 0.2047 | 22 | 0.7157 |
| 0.7053 | 0.3070 | 33 | 0.6548 |
| 0.6688 | 0.4093 | 44 | 0.6449 |
| 0.6823 | 0.5116 | 55 | 0.6282 |
| 0.5876 | 0.6140 | 66 | 0.6251 |
| 0.6994 | 0.7163 | 77 | 0.6290 |
| 0.6662 | 0.8186 | 88 | 0.6311 |
| 0.6239 | 0.9209 | 99 | 0.6338 |
| 0.5959 | 1.0233 | 110 | 0.6319 |
| 0.6408 | 1.1256 | 121 | 0.6668 |
| 0.595 | 1.2279 | 132 | 0.6221 |
| 0.5476 | 1.3302 | 143 | 0.6295 |
| 0.587 | 1.4326 | 154 | 0.6569 |
| 0.5867 | 1.5349 | 165 | 0.6208 |
| 0.5895 | 1.6372 | 176 | 0.6264 |
| 0.6581 | 1.7395 | 187 | 0.6208 |
| 0.5872 | 1.8419 | 198 | 0.6290 |
| 0.6314 | 1.9442 | 209 | 0.6243 |
| 0.4397 | 2.0465 | 220 | 0.6591 |
| 0.4568 | 2.1488 | 231 | 0.7095 |
| 0.422 | 2.2512 | 242 | 0.6914 |
| 0.453 | 2.3535 | 253 | 0.7001 |
| 0.4678 | 2.4558 | 264 | 0.6896 |
| 0.4335 | 2.5581 | 275 | 0.6776 |
| 0.4796 | 2.6605 | 286 | 0.6829 |
| 0.4637 | 2.7628 | 297 | 0.6742 |
| 0.4532 | 2.8651 | 308 | 0.6828 |
| 0.4348 | 2.9674 | 319 | 0.6836 |
| 0.2787 | 3.0698 | 330 | 0.8085 |
| 0.2336 | 3.1721 | 341 | 0.8380 |
| 0.2341 | 3.2744 | 352 | 0.7998 |
| 0.2393 | 3.3767 | 363 | 0.8041 |
| 0.2826 | 3.4791 | 374 | 0.8040 |
| 0.2505 | 3.5814 | 385 | 0.8099 |
| 0.3057 | 3.6837 | 396 | 0.8103 |
| 0.2789 | 3.7860 | 407 | 0.7964 |
| 0.269 | 3.8884 | 418 | 0.7891 |
| 0.2493 | 3.9907 | 429 | 0.7958 |
| 0.1193 | 4.0930 | 440 | 0.9242 |
| 0.1143 | 4.1953 | 451 | 0.9331 |
| 0.1147 | 4.2977 | 462 | 0.9112 |
| 0.1351 | 4.4 | 473 | 0.9290 |
| 0.0982 | 4.5023 | 484 | 0.9358 |
| 0.1011 | 4.6047 | 495 | 0.9279 |
| 0.09 | 4.7070 | 506 | 0.9289 |
| 0.1063 | 4.8093 | 517 | 0.9392 |
| 0.1038 | 4.9116 | 528 | 0.9267 |
| 0.0361 | 5.0140 | 539 | 0.9412 |
| 0.0371 | 5.1163 | 550 | 1.0589 |
| 0.033 | 5.2186 | 561 | 1.0253 |
| 0.0426 | 5.3209 | 572 | 1.0482 |
| 0.0357 | 5.4233 | 583 | 1.0388 |
| 0.0355 | 5.5256 | 594 | 1.0566 |
| 0.0373 | 5.6279 | 605 | 1.0470 |
| 0.0395 | 5.7302 | 616 | 1.0581 |
| 0.0366 | 5.8326 | 627 | 1.0696 |
| 0.0387 | 5.9349 | 638 | 1.0641 |
| 0.0127 | 6.0372 | 649 | 1.0692 |
| 0.0114 | 6.1395 | 660 | 1.1612 |
| 0.0105 | 6.2419 | 671 | 1.1575 |
| 0.0121 | 6.3442 | 682 | 1.1479 |
| 0.0082 | 6.4465 | 693 | 1.1591 |
| 0.011 | 6.5488 | 704 | 1.1669 |
| 0.0112 | 6.6512 | 715 | 1.1645 |
| 0.0109 | 6.7535 | 726 | 1.1628 |
| 0.0102 | 6.8558 | 737 | 1.1705 |
| 0.0098 | 6.9581 | 748 | 1.1769 |
| 0.006 | 7.0605 | 759 | 1.1840 |
| 0.0064 | 7.1628 | 770 | 1.2016 |
| 0.0063 | 7.2651 | 781 | 1.2133 |
| 0.0058 | 7.3674 | 792 | 1.2182 |
| 0.0056 | 7.4698 | 803 | 1.2218 |
| 0.0057 | 7.5721 | 814 | 1.2234 |
| 0.0059 | 7.6744 | 825 | 1.2245 |
| 0.0057 | 7.7767 | 836 | 1.2247 |
| 0.0048 | 7.8791 | 847 | 1.2247 |
| 0.0054 | 7.9814 | 858 | 1.2246 |
| 0.0051 | 8.0837 | 869 | 1.2252 |
| 0.0059 | 8.1860 | 880 | 1.2261 |
| 0.0053 | 8.2884 | 891 | 1.2272 |
| 0.0057 | 8.3907 | 902 | 1.2275 |
| 0.0056 | 8.4930 | 913 | 1.2280 |
| 0.0052 | 8.5953 | 924 | 1.2283 |
| 0.007 | 8.6977 | 935 | 1.2287 |
| 0.0052 | 8.8 | 946 | 1.2285 |
| 0.005 | 8.9023 | 957 | 1.2289 |
| 0.0056 | 9.0047 | 968 | 1.2288 |
| 0.005 | 9.1070 | 979 | 1.2289 |
| 0.0054 | 9.2093 | 990 | 1.2290 |
| 0.0053 | 9.3116 | 1001 | 1.2288 |
| 0.0049 | 9.4140 | 1012 | 1.2290 |
| 0.0052 | 9.5163 | 1023 | 1.2290 |
| 0.0058 | 9.6186 | 1034 | 1.2291 |
| 0.0059 | 9.7209 | 1045 | 1.2289 |
| 0.0055 | 9.8233 | 1056 | 1.2289 |
| 0.0054 | 9.9256 | 1067 | 1.2288 |


### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.19.1
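
To match the training environment, pin these versions when loading the adapter (e.g. `pip install peft==0.10.0 transformers==4.40.1`); newer versions will likely work but have not been verified here.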

adapter_model.bin (ADDED; Git LFS pointer, 3 lines)

version https://git-lfs.github.com/spec/v1
oid sha256:c8b5dd44eebbdd932fda643d6b86c880bd1005e64260ac48fd496399b571610f
size 251901130