jssky committed
Commit bd1ca02 · verified · 1 Parent(s): a263510

End of training

Files changed (2)
  1. README.md +6 -9
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -102,7 +102,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 8.9346
+- Loss: 8.5702
 
 ## Model description
 
@@ -125,11 +125,8 @@ The following hyperparameters were used during training:
 - train_batch_size: 1
 - eval_batch_size: 1
 - seed: 42
-- distributed_type: multi-GPU
-- num_devices: 2
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 8
-- total_eval_batch_size: 2
+- total_train_batch_size: 4
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
@@ -140,10 +137,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 10.6832       | 0.0001 | 1    | 9.9402          |
-| 8.955         | 0.0002 | 3    | 9.9402          |
-| 8.8015        | 0.0003 | 6    | 9.9000          |
-| 9.2568        | 0.0005 | 9    | 8.9346          |
+| 10.7397       | 0.0000 | 1    | 9.9402          |
+| 6.4745        | 0.0001 | 3    | 9.9402          |
+| 7.9432        | 0.0002 | 6    | 9.5620          |
+| 8.8484        | 0.0002 | 9    | 8.5702          |
 
 
 ### Framework versions
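The hyperparameter hunk drops the multi-GPU settings (distributed_type: multi-GPU, num_devices: 2), and the derived totals shrink accordingly: total_train_batch_size goes from 8 to 4. A minimal sketch of the usual derivation, assuming the common HF Trainer convention of per-device batch × gradient-accumulation steps × device count (the exact formula used to generate this README is not shown in the diff):

```python
# Sketch only: assumes total_train_batch_size is derived as
# per-device batch * grad-accum steps * device count.
def effective_batch_size(per_device_batch: int,
                         grad_accum_steps: int,
                         num_devices: int = 1) -> int:
    """Total train batch size seen per optimizer step."""
    return per_device_batch * grad_accum_steps * num_devices

# Old config: 1 per device * 4 accumulation steps * 2 GPUs -> 8
assert effective_batch_size(1, 4, num_devices=2) == 8
# New config: single device, 1 * 4 -> 4, matching the updated README
assert effective_batch_size(1, 4) == 4
```

Under that assumption, the change from 8 to 4 is fully explained by moving from two devices to one; the per-device batch size and accumulation steps are unchanged.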
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c638d3d5980ceb5e2a3166b18dfa79735920eeae44823f325610fe56d8e6f150
+oid sha256:b93c9408cd260ca5b00e3a401c82bdbb34d29fe03cf0cf015c51800f206f7b2a
 size 167934026
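The adapter_model.bin change is a Git LFS pointer update: only the sha256 oid changes, while the size stays at 167934026 bytes, consistent with retrained adapter weights of the same shape. Since the LFS oid is the sha256 of the file contents, a downloaded copy can be checked against the pointer; a minimal sketch (the local path here is illustrative, not part of the commit):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks to avoid loading 167 MB into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from the new LFS pointer above
expected = "b93c9408cd260ca5b00e3a401c82bdbb34d29fe03cf0cf015c51800f206f7b2a"
# assert sha256_of("adapter_model.bin") == expected  # path is illustrative
```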