noeloco committed
Commit 5546998
1 Parent(s): eab2fdd

End of training

Files changed (2)
  1. README.md +162 -0
  2. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,162 @@
+ ---
+ license: llama2
+ library_name: peft
+ tags:
+ - axolotl
+ - dpo
+ - trl
+ - generated_from_trainer
+ base_model: codellama/CodeLlama-7b-hf
+ model-index:
+ - name: modeltest1-dpo
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ base_model: codellama/CodeLlama-7b-hf
+ model_type: LlamaForCausalLM
+ tokenizer_type: CodeLlamaTokenizer
+ is_llama_derived_model: true
+
+ hub_model_id: noeloco/modeltest1-dpo
+
+ load_in_8bit: false
+ load_in_4bit: true
+ strict: false
+
+ datasets:
+   - path: noeloco/fizzbuzz-sft
+     type: alpaca
+     ds_type: json
+
+ hf_use_auth_token: true
+ push_dataset_to_hub: noeloco
+ val_set_size: 0.05
+ output_dir: ./lora-out
+ chat_template: chatml
+
+ rl: dpo
+ datasets:
+   - path: noeloco/fizzbuzz-dpo
+     split: train
+     data_files:
+       - /tmp/fizzbuzz-ft/datasets/training-set-dpo.json
+     #type:
+     #  field_prompt: question
+     #  field_chosen: chosen
+     #  field_rejected: rejected
+     ds_type: json
+     #type: intel_apply_chatml
+     type: chatml.intel
+
+ hf_use_auth_token: true
+ push_dataset_to_hub: noeloco
+ val_set_size: 0.05
+ output_dir: ./lora-out
+ chat_template: chatml
+
+
+ sequence_len: 2048
+ sample_packing: false
+ pad_to_sequence_len: true
+
+ adapter: lora
+ lora_model_dir:
+ lora_r: 16
+ lora_alpha: 8
+ lora_dropout: 0.05
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+
+ wandb_project: runpod1
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+ gradient_accumulation_steps: 1
+ micro_batch_size: 2
+ num_epochs: 3
+ optimizer: paged_adamw_32bit
+ lr_scheduler: cosine
+ learning_rate: 0.0002
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: true
+ fp16: false
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ warmup_steps: 10
+ evals_per_epoch: 4
+ saves_per_epoch: 1
+ debug: true
+ deepspeed:
+ weight_decay: 0.0
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   bos_token: "<s>"
+   eos_token: "</s>"
+   unk_token: "<unk>"
+
+ ```
+
+ </details><br>
+
+ # modeltest1-dpo
+
+ This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf), further trained with DPO on the noeloco/fizzbuzz-dpo preference dataset referenced in the axolotl config above.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
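+
+ Pending a fuller write-up, the sketch below shows one plausible way to run the model for inference. It assumes this repository hosts a PEFT (LoRA) adapter for `codellama/CodeLlama-7b-hf` (per `library_name: peft` and `hub_model_id: noeloco/modeltest1-dpo` in the config above); the prompt is illustrative only.
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base_id = "codellama/CodeLlama-7b-hf"
+ adapter_id = "noeloco/modeltest1-dpo"  # hub_model_id from the config above
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base_model = AutoModelForCausalLM.from_pretrained(
+     base_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ # Attach the DPO-trained LoRA adapter on top of the frozen base weights.
+ model = PeftModel.from_pretrained(base_model, adapter_id)
+
+ prompt = "Write a Python function that prints FizzBuzz from 1 to n."
+ inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
+ with torch.no_grad():
+     output = model.generate(**inputs, max_new_tokens=200)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```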
+
+ ## Training and evaluation data
+
+ More information needed
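+
+ For orientation, the DPO stage of the config above points at the `noeloco/fizzbuzz-dpo` dataset, with commented-out field mappings of `question` / `chosen` / `rejected`. The sketch below loads that dataset and shows the rough shape of one preference record; it assumes the dataset is accessible under that hub id and keeps those field names, and the record content is invented for illustration.
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumption: the preference data is reachable under this hub id (it may be private).
+ ds = load_dataset("noeloco/fizzbuzz-dpo", split="train")
+ print(ds.column_names)  # expected roughly: ['question', 'chosen', 'rejected']
+
+ # Hypothetical example of a single preference pair under the chatml.intel mapping:
+ record = {
+     "question": "Write a Python function that prints FizzBuzz from 1 to n.",
+     "chosen": "def fizzbuzz(n):\n    for i in range(1, n + 1):\n        ...",
+     "rejected": "print('FizzBuzz')",
+ }
+ ```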
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - training_steps: 222
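+
+ These settings mirror the axolotl config. As a non-authoritative reconstruction (this is not the axolotl code path, argument names vary across trl versions, and `beta` is not recorded in this card, so trl's default of 0.1 is shown), a roughly equivalent standalone trl DPO setup might look like:
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from peft import LoraConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
+ from trl import DPOTrainer
+
+ base_id = "codellama/CodeLlama-7b-hf"
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ tokenizer.pad_token = tokenizer.eos_token
+
+ model = AutoModelForCausalLM.from_pretrained(
+     base_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ # LoRA settings taken from the axolotl config (lora_r / lora_alpha / lora_dropout).
+ peft_config = LoraConfig(r=16, lora_alpha=8, lora_dropout=0.05, task_type="CAUSAL_LM")
+
+ # Hyperparameters copied from the list above; everything else is left at defaults.
+ args = TrainingArguments(
+     output_dir="./lora-out",
+     per_device_train_batch_size=2,
+     gradient_accumulation_steps=1,
+     num_train_epochs=3,
+     learning_rate=2e-4,
+     lr_scheduler_type="cosine",
+     warmup_steps=10,
+     optim="paged_adamw_32bit",
+     bf16=True,
+     gradient_checkpointing=True,
+     logging_steps=1,
+     seed=42,
+ )
+
+ # Assumption: the dataset exposes question/chosen/rejected; DPOTrainer expects
+ # prompt/chosen/rejected, so the prompt column is renamed here.
+ train_dataset = load_dataset("noeloco/fizzbuzz-dpo", split="train")
+ train_dataset = train_dataset.rename_column("question", "prompt")
+
+ trainer = DPOTrainer(
+     model=model,
+     ref_model=None,   # with a PEFT adapter, the frozen base model acts as the reference
+     args=args,
+     beta=0.1,         # not recorded in this card; 0.1 is trl's default
+     train_dataset=train_dataset,
+     tokenizer=tokenizer,
+     peft_config=peft_config,
+ )
+ trainer.train()
+ ```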
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.10.1.dev0
+ - Transformers 4.40.0.dev0
+ - Pytorch 2.1.2+cu118
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2d4c0977c02e5eb8499d5e9d472853fe43fa4bbc11226f37b157a77b4674b08
+ size 4248847729