willtensora committed · verified
Commit 64ba05b · 1 Parent(s): 6f25c2f

End of training

Files changed (3):
  1. README.md +25 -19
  2. generation_config.json +3 -2
  3. pytorch_model.bin +2 -2
README.md CHANGED
@@ -1,11 +1,12 @@
 ---
 library_name: transformers
-base_model: katuni4ka/tiny-random-qwen1.5-moe
+license: mit
+base_model: fxmarty/tiny-random-GemmaForCausalLM
 tags:
 - axolotl
 - generated_from_trainer
 model-index:
-- name: e61e89f0-854a-4922-8d25-dae435e91af0
+- name: fd1980a0-7e71-4e52-addb-318dca5991d5
   results: []
 ---
 
@@ -17,20 +18,20 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.4.1`
 ```yaml
-base_model: katuni4ka/tiny-random-qwen1.5-moe
+base_model: fxmarty/tiny-random-GemmaForCausalLM
 batch_size: 32
 bf16: true
 chat_template: tokenizer_default_fallback_alpaca
 datasets:
 - data_files:
-  - 95544452e61c7393_train_data.json
+  - b7c2a4a781c93416_train_data.json
   ds_type: json
   format: custom
-  path: /workspace/input_data/95544452e61c7393_train_data.json
+  path: /workspace/input_data/b7c2a4a781c93416_train_data.json
   type:
-    field_input: input
-    field_instruction: instruction
-    field_output: output
+    field_input: context
+    field_instruction: question
+    field_output: answer
     format: '{instruction} {input}'
     no_input_format: '{instruction}'
     system_format: '{system}'
@@ -40,7 +41,7 @@ flash_attention: true
 gpu_memory_limit: 80GiB
 gradient_checkpointing: true
 group_by_length: true
-hub_model_id: willtensora/e61e89f0-854a-4922-8d25-dae435e91af0
+hub_model_id: willtensora/fd1980a0-7e71-4e52-addb-318dca5991d5
 hub_strategy: checkpoint
 learning_rate: 0.0002
 logging_steps: 10
@@ -56,13 +57,13 @@ sample_packing: false
 save_steps: 40
 save_total_limit: 1
 sequence_len: 2048
-tokenizer_type: Qwen2TokenizerFast
+tokenizer_type: GemmaTokenizerFast
 train_on_inputs: false
 trust_remote_code: true
 val_set_size: 0.1
 wandb_entity: ''
 wandb_mode: online
-wandb_name: katuni4ka/tiny-random-qwen1.5-moe-/workspace/input_data/95544452e61c7393_train_data.json
+wandb_name: fxmarty/tiny-random-GemmaForCausalLM-/workspace/input_data/b7c2a4a781c93416_train_data.json
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: default
@@ -73,11 +74,11 @@ xformers_attention: true
 
 </details><br>
 
-# e61e89f0-854a-4922-8d25-dae435e91af0
+# fd1980a0-7e71-4e52-addb-318dca5991d5
 
-This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
+This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 11.6281
+- Loss: 11.7971
 
 ## Model description
 
@@ -106,16 +107,21 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 32
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 2
-- training_steps: 40
+- lr_scheduler_warmup_steps: 7
+- training_steps: 156
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| No log | 0.0031 | 1 | 11.9223 |
-| 11.7325 | 0.0629 | 20 | 11.6783 |
-| 11.6304 | 0.1258 | 40 | 11.6281 |
+| No log | 0.0008 | 1 | 12.4537 |
+| 12.4357 | 0.0161 | 20 | 12.4267 |
+| 12.392 | 0.0322 | 40 | 12.3762 |
+| 12.3026 | 0.0483 | 60 | 12.2651 |
+| 12.1177 | 0.0645 | 80 | 12.0658 |
+| 11.9286 | 0.0806 | 100 | 11.8860 |
+| 11.8324 | 0.0967 | 120 | 11.8100 |
+| 11.798 | 0.1128 | 140 | 11.7971 |
 
 
 ### Framework versions
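
The updated card publishes the run to the Hub as `willtensora/fd1980a0-7e71-4e52-addb-318dca5991d5` (see `hub_model_id` in the config above), so the checkpoint can be pulled back down with the standard `transformers` auto classes. A minimal sketch, not part of the commit, assuming the repo is public; the prompt text is invented to match the config's `'{instruction} {input}'` template:

```python
# Minimal sketch: load the checkpoint this run pushed to the Hub.
# Repo id comes from hub_model_id above; public access is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "willtensora/fd1980a0-7e71-4e52-addb-318dca5991d5"

# Mirrors the training config's `trust_remote_code: true`.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# The dataset mapped question -> instruction and context -> input, and the
# template joins them as '{instruction} {input}'; this example is made up.
prompt = "What color is the sky? The sky over the bay was a cloudless blue."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```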
generation_config.json CHANGED
@@ -1,7 +1,8 @@
 {
   "_from_model_config": true,
-  "bos_token_id": 151643,
+  "bos_token_id": 2,
   "do_sample": true,
-  "eos_token_id": 151643,
+  "eos_token_id": 1,
+  "pad_token_id": 0,
   "transformers_version": "4.46.0"
 }
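
The id changes track the base-model swap: Qwen1.5 uses 151643 (`<|endoftext|>`) for both BOS and EOS, while Gemma's vocabulary reserves 2 for `<bos>`, 1 for `<eos>`, and 0 for `<pad>`. A minimal sketch of writing an equivalent file with `transformers.GenerationConfig`, as an illustration of the format rather than how this commit was produced:

```python
# Minimal sketch: rebuild the new generation_config.json programmatically.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    bos_token_id=2,   # Gemma <bos>
    eos_token_id=1,   # Gemma <eos>
    pad_token_id=0,   # Gemma <pad>
    do_sample=True,
)
# save_pretrained writes generation_config.json into the given directory;
# the directory name here is an assumption.
gen_config.save_pretrained("./checkpoint")
```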
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:244c30fce0d5c4892e3b25d25e50c952fa49cb08493bb32684f850179545a7e3
-size 19817334
+oid sha256:4bc76ae4e72c9fc13bfe9567ae655234c8d3f2fcf4460d169dedaebd1865dcc9
+size 16392015
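
`pytorch_model.bin` is stored through Git LFS, so the commit only swaps the pointer file: the new object's SHA-256 and size (16,392,015 bytes, down from 19,817,334). A minimal sketch of verifying a downloaded copy against the pointer; the local path is an assumption:

```python
# Minimal sketch: verify a downloaded pytorch_model.bin against the
# Git LFS pointer (oid sha256 + size) recorded in this commit.
import hashlib

EXPECTED_OID = "4bc76ae4e72c9fc13bfe9567ae655234c8d3f2fcf4460d169dedaebd1865dcc9"
EXPECTED_SIZE = 16392015

def verify_lfs_object(path: str) -> bool:
    sha, size = hashlib.sha256(), 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            sha.update(chunk)
            size += len(chunk)
    return sha.hexdigest() == EXPECTED_OID and size == EXPECTED_SIZE

print(verify_lfs_object("pytorch_model.bin"))  # path is an assumption
```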