er1123090 committed
Commit f9924d1
1 Parent(s): df87784

Upload README.md with huggingface_hub

Files changed (1): README.md (+149 −1)
README.md CHANGED: the previous contents (front matter with only `license: apache-2.0`) are replaced by the model card below.

---
library_name: peft
tags:
- generated_from_trainer
base_model: JY623/KoSOLAR-v2.0
model-index:
- name: qlora-out/v1.2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: JY623/KoSOLAR-v2.0
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

push_dataset_to_hub:
datasets:
  - path: kyujinpy/KOR-OpenOrca-Platypus-v3
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./qlora-out/v1.2

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

warmup_steps: 20
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details><br>
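
For readers more familiar with PEFT than with axolotl, the LoRA settings in the config above roughly correspond to the `peft.LoraConfig` sketched below. This is an editorial illustration, not part of the original card; in particular, the `target_modules` list is an assumption about what `lora_target_linear: true` resolves to for a Llama-style architecture such as SOLAR.

```python
# Approximate PEFT equivalent of the LoRA settings in the axolotl config above.
# Illustrative only: the target_modules list assumes the usual linear projections
# of a Llama-style model; it is not stated explicitly in the config.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,             # lora_r
    lora_alpha=16,    # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    target_modules=[  # lora_target_linear: true -> all linear projections (assumed)
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```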

# qlora-out/v1.2

This model is a fine-tuned version of [JY623/KoSOLAR-v2.0](https://huggingface.co/JY623/KoSOLAR-v2.0) on the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1419
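
Assuming the reported loss is the mean token-level cross-entropy in nats (the Hugging Face `Trainer` default), it corresponds to the perplexity computed below. This conversion is an editorial note, not part of the generated card.

```python
# Perplexity implied by the final evaluation loss, assuming mean cross-entropy in nats.
import math

eval_loss = 5.1419
perplexity = math.exp(eval_loss)
print(f"{perplexity:.0f}")  # ~171
```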

## Model description

More information needed

## Intended uses & limitations

More information needed
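
The card gives no usage guidance. As a starting point, the sketch below shows one way to load the base model in 4-bit and attach this LoRA adapter with PEFT, mirroring the quantization used during training. The adapter repo id is a placeholder (replace it with this repository's Hub id), and the Alpaca-style prompt template is assumed from `type: alpaca` in the config.

```python
# Minimal usage sketch (editorial, not from the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "JY623/KoSOLAR-v2.0"
adapter_id = "er1123090/ADAPTER_REPO_ID"  # placeholder: replace with this repo's Hub id

# 4-bit loading, matching load_in_4bit: true in the training config.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style prompt format (assumed from the dataset type in the config).
prompt = "### Instruction:\n대한민국의 수도는 어디인가요?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```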

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 28
- total_eval_batch_size: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
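
As a quick cross-check (an editorial note, not part of the generated card), the total batch sizes listed above follow directly from the per-device batch size, gradient accumulation, and device count:

```python
# How the effective batch sizes listed above are derived from the config values.
micro_batch_size = 1              # per-device train/eval batch size
gradient_accumulation_steps = 4
num_devices = 7

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices

print(total_train_batch_size)  # 28
print(total_eval_batch_size)   # 7
```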

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.4775       | 0.0   | 1    | 13.4330         |
| 6.9219        | 0.25  | 64   | 6.2022          |
| 5.5416        | 0.5   | 128  | 5.2780          |
| 5.4282        | 0.75  | 192  | 5.1929          |
| 5.4864        | 1.0   | 256  | 5.1416          |
| 5.2877        | 1.24  | 320  | 5.1441          |
| 5.1731        | 1.49  | 384  | 5.1413          |
| 5.6221        | 1.74  | 448  | 5.1406          |
| 5.3737        | 1.99  | 512  | 5.1419          |


### Framework versions

- PEFT 0.9.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0