tsavage68 committed on
Commit
7756c21
1 Parent(s): 0271ed9

End of training

README.md CHANGED
@@ -17,15 +17,15 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.6923
- - Rewards/chosen: -0.0044
- - Rewards/rejected: -0.0063
- - Rewards/accuracies: 0.5187
- - Rewards/margins: 0.0018
- - Logps/rejected: -21.6204
- - Logps/chosen: -20.8367
- - Logits/rejected: 14.2289
- - Logits/chosen: 14.2314
+ - Loss: 0.6924
+ - Rewards/chosen: -0.0146
+ - Rewards/rejected: -0.0175
+ - Rewards/accuracies: 0.5275
+ - Rewards/margins: 0.0029
+ - Logps/rejected: -21.6159
+ - Logps/chosen: -20.8410
+ - Logits/rejected: 14.2241
+ - Logits/chosen: 14.2267
 
  ## Model description
 
@@ -59,26 +59,26 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
  |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.6934 | 0.05 | 50 | 0.6935 | -0.0003 | 0.0002 | 0.5033 | -0.0005 | -21.5550 | -20.7952 | 14.2545 | 14.2571 |
- | 0.6945 | 0.1 | 100 | 0.6936 | 0.0001 | 0.0009 | 0.4857 | -0.0009 | -21.5484 | -20.7916 | 14.2577 | 14.2603 |
- | 0.6957 | 0.15 | 150 | 0.6933 | -0.0003 | -0.0000 | 0.4484 | -0.0002 | -21.5578 | -20.7949 | 14.2444 | 14.2470 |
- | 0.6921 | 0.2 | 200 | 0.6933 | 0.0035 | 0.0038 | 0.5011 | -0.0002 | -21.5199 | -20.7570 | 14.2457 | 14.2483 |
- | 0.6948 | 0.24 | 250 | 0.6935 | -0.0006 | -0.0001 | 0.4923 | -0.0005 | -21.5583 | -20.7983 | 14.2481 | 14.2507 |
- | 0.6924 | 0.29 | 300 | 0.6922 | 0.0008 | -0.0011 | 0.5407 | 0.0019 | -21.5685 | -20.7839 | 14.2444 | 14.2470 |
- | 0.6916 | 0.34 | 350 | 0.6930 | -0.0007 | -0.0012 | 0.5055 | 0.0005 | -21.5698 | -20.7993 | 14.2356 | 14.2383 |
- | 0.6904 | 0.39 | 400 | 0.6928 | -0.0043 | -0.0051 | 0.5077 | 0.0009 | -21.6086 | -20.8348 | 14.2367 | 14.2393 |
- | 0.6904 | 0.44 | 450 | 0.6923 | -0.0042 | -0.0060 | 0.5582 | 0.0018 | -21.6177 | -20.8341 | 14.2352 | 14.2378 |
- | 0.6904 | 0.49 | 500 | 0.6929 | -0.0049 | -0.0057 | 0.5297 | 0.0008 | -21.6144 | -20.8416 | 14.2367 | 14.2393 |
- | 0.6893 | 0.54 | 550 | 0.6924 | -0.0032 | -0.0048 | 0.5385 | 0.0016 | -21.6055 | -20.8238 | 14.2319 | 14.2345 |
- | 0.6903 | 0.59 | 600 | 0.6923 | -0.0045 | -0.0063 | 0.5055 | 0.0018 | -21.6203 | -20.8373 | 14.2321 | 14.2347 |
- | 0.6907 | 0.64 | 650 | 0.6923 | -0.0038 | -0.0057 | 0.5121 | 0.0019 | -21.6141 | -20.8299 | 14.2239 | 14.2265 |
- | 0.6913 | 0.68 | 700 | 0.6926 | -0.0045 | -0.0058 | 0.5231 | 0.0014 | -21.6159 | -20.8372 | 14.2301 | 14.2327 |
- | 0.6909 | 0.73 | 750 | 0.6917 | -0.0036 | -0.0067 | 0.5451 | 0.0031 | -21.6244 | -20.8281 | 14.2134 | 14.2160 |
- | 0.6876 | 0.78 | 800 | 0.6928 | -0.0046 | -0.0056 | 0.5187 | 0.0009 | -21.6130 | -20.8387 | 14.2215 | 14.2241 |
- | 0.6985 | 0.83 | 850 | 0.6920 | -0.0040 | -0.0065 | 0.5560 | 0.0025 | -21.6226 | -20.8319 | 14.2307 | 14.2334 |
- | 0.6912 | 0.88 | 900 | 0.6925 | -0.0036 | -0.0051 | 0.5209 | 0.0014 | -21.6082 | -20.8285 | 14.2279 | 14.2304 |
- | 0.6931 | 0.93 | 950 | 0.6923 | -0.0044 | -0.0063 | 0.5187 | 0.0018 | -21.6204 | -20.8367 | 14.2289 | 14.2314 |
- | 0.6914 | 0.98 | 1000 | 0.6923 | -0.0044 | -0.0063 | 0.5187 | 0.0018 | -21.6204 | -20.8367 | 14.2289 | 14.2314 |
+ | 0.6908 | 0.05 | 50 | 0.6958 | -0.0024 | 0.0016 | 0.4835 | -0.0040 | -21.5521 | -20.8002 | 14.2618 | 14.2644 |
+ | 0.7007 | 0.1 | 100 | 0.6940 | -0.0004 | -0.0001 | 0.5033 | -0.0003 | -21.5577 | -20.7936 | 14.2508 | 14.2534 |
+ | 0.6945 | 0.15 | 150 | 0.6935 | -0.0010 | -0.0016 | 0.4923 | 0.0006 | -21.5629 | -20.7956 | 14.2501 | 14.2527 |
+ | 0.6911 | 0.2 | 200 | 0.6947 | 0.0111 | 0.0130 | 0.5055 | -0.0019 | -21.5142 | -20.7552 | 14.2536 | 14.2561 |
+ | 0.6944 | 0.24 | 250 | 0.6926 | -0.0007 | -0.0032 | 0.5297 | 0.0025 | -21.5681 | -20.7945 | 14.2489 | 14.2515 |
+ | 0.6893 | 0.29 | 300 | 0.6925 | -0.0029 | -0.0056 | 0.5143 | 0.0027 | -21.5761 | -20.8017 | 14.2454 | 14.2480 |
+ | 0.6964 | 0.34 | 350 | 0.6933 | -0.0031 | -0.0043 | 0.4901 | 0.0012 | -21.5718 | -20.8026 | 14.2500 | 14.2526 |
+ | 0.6846 | 0.39 | 400 | 0.6899 | -0.0142 | -0.0220 | 0.5516 | 0.0078 | -21.6306 | -20.8394 | 14.2259 | 14.2284 |
+ | 0.6823 | 0.44 | 450 | 0.6910 | -0.0143 | -0.0200 | 0.5143 | 0.0056 | -21.6240 | -20.8400 | 14.2294 | 14.2320 |
+ | 0.6838 | 0.49 | 500 | 0.6908 | -0.0099 | -0.0159 | 0.5297 | 0.0059 | -21.6103 | -20.8253 | 14.2237 | 14.2263 |
+ | 0.678 | 0.54 | 550 | 0.6897 | -0.0151 | -0.0234 | 0.5407 | 0.0082 | -21.6354 | -20.8427 | 14.2251 | 14.2277 |
+ | 0.6872 | 0.59 | 600 | 0.6915 | -0.0176 | -0.0223 | 0.5385 | 0.0047 | -21.6318 | -20.8508 | 14.2284 | 14.2311 |
+ | 0.6881 | 0.64 | 650 | 0.6906 | -0.0132 | -0.0196 | 0.5319 | 0.0064 | -21.6228 | -20.8362 | 14.2236 | 14.2262 |
+ | 0.6841 | 0.68 | 700 | 0.6910 | -0.0146 | -0.0202 | 0.5143 | 0.0057 | -21.6249 | -20.8408 | 14.2152 | 14.2178 |
+ | 0.6883 | 0.73 | 750 | 0.6901 | -0.0148 | -0.0223 | 0.5626 | 0.0075 | -21.6317 | -20.8414 | 14.2218 | 14.2244 |
+ | 0.6813 | 0.78 | 800 | 0.6917 | -0.0150 | -0.0192 | 0.5341 | 0.0041 | -21.6213 | -20.8422 | 14.2255 | 14.2281 |
+ | 0.6987 | 0.83 | 850 | 0.6902 | -0.0129 | -0.0204 | 0.5297 | 0.0075 | -21.6253 | -20.8350 | 14.2198 | 14.2223 |
+ | 0.687 | 0.88 | 900 | 0.6928 | -0.0126 | -0.0148 | 0.5121 | 0.0021 | -21.6067 | -20.8343 | 14.2248 | 14.2275 |
+ | 0.6885 | 0.93 | 950 | 0.6924 | -0.0146 | -0.0175 | 0.5275 | 0.0029 | -21.6159 | -20.8410 | 14.2241 | 14.2267 |
+ | 0.6904 | 0.98 | 1000 | 0.6924 | -0.0146 | -0.0175 | 0.5275 | 0.0029 | -21.6159 | -20.8410 | 14.2241 | 14.2267 |
 
 
  ### Framework versions
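The Rewards/*, Logps/* and Logits/* columns above match the logging convention of TRL's DPOTrainer, so (as an assumption, since the card does not name the trainer) the reward figures are most likely derived from policy and reference log-probabilities. A minimal sketch of that relationship, with a hypothetical `beta` value because the DPO temperature used for this run is not shown in the diff:

```python
# Hypothetical reconstruction of the Rewards/* metrics, assuming TRL-style DPO logging.
# `beta` (the DPO temperature) is a placeholder; its value for this run is not recorded here.
def dpo_reward_metrics(policy_logp_chosen, policy_logp_rejected,
                       ref_logp_chosen, ref_logp_rejected, beta=0.1):
    rewards_chosen = beta * (policy_logp_chosen - ref_logp_chosen)        # -> Rewards/chosen
    rewards_rejected = beta * (policy_logp_rejected - ref_logp_rejected)  # -> Rewards/rejected
    margin = rewards_chosen - rewards_rejected                            # -> Rewards/margins
    accuracy = float(rewards_chosen > rewards_rejected)                   # averaged -> Rewards/accuracies
    return rewards_chosen, rewards_rejected, margin, accuracy
```

On that reading, the updated checkpoint's margin of 0.0029 (versus 0.0018 before) and accuracy of 0.5275 indicate the model still ranks chosen completions only marginally above rejected ones.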
final_checkpoint/model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:780b60f82227d9c36b067e8c836d71ba719b98838c1c7658124766b999e861a6
+ oid sha256:9ddbc2416af7c555f5b6569d06296d51fded87e6201d52416b93d6ec49565243
  size 4976746424
final_checkpoint/model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c0939291d61728558bd0e53e7870fa38cf14192f133abadad127dec6c1f93162
+ oid sha256:501f4a75d860cd8b4972acc1c9175a2901cf54dfd4f3f48ec92a8ee9e01ab46b
  size 4966260992
final_checkpoint/model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:06baa994d02010396c985af33be49f776d4d10aa1cf49b6e17487221a81a2d9d
+ oid sha256:c0b40fa34bf684412c1165c08848d30a9038a89eab9389392959b9ee51d0cac1
  size 3355588232
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:780b60f82227d9c36b067e8c836d71ba719b98838c1c7658124766b999e861a6
+ oid sha256:9ddbc2416af7c555f5b6569d06296d51fded87e6201d52416b93d6ec49565243
  size 4976746424
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c0939291d61728558bd0e53e7870fa38cf14192f133abadad127dec6c1f93162
+ oid sha256:501f4a75d860cd8b4972acc1c9175a2901cf54dfd4f3f48ec92a8ee9e01ab46b
  size 4966260992
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:06baa994d02010396c985af33be49f776d4d10aa1cf49b6e17487221a81a2d9d
+ oid sha256:c0b40fa34bf684412c1165c08848d30a9038a89eab9389392959b9ee51d0cac1
  size 3355588232
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6b3277694b6d2635cf83e2b12a0e856bbfb5040e92b88f24e3656db3eb54f30b
+ oid sha256:64b58e8f0725e519fe2d847968334783aa757f7fead7d0c9c20414b8b081362b
  size 4475
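Each entry above is a Git LFS pointer file: the weights themselves live in LFS storage, and only the `oid sha256:` field changes in this commit while the `size` stays the same. A minimal sketch (not part of this repo) for checking that a downloaded shard matches its updated pointer, using the new oid of `model-00001-of-00003.safetensors` from this commit:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB shards don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid taken from the updated LFS pointer in this commit
expected = "9ddbc2416af7c555f5b6569d06296d51fded87e6201d52416b93d6ec49565243"
assert sha256_of("model-00001-of-00003.safetensors") == expected
```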