chchen committed
Commit
ec3c136
1 Parent(s): ef2828f

Model save

Files changed (2)
  1. README.md +77 -0
  2. trainer_log.jsonl +28 -0
README.md ADDED
@@ -0,0 +1,77 @@
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- llama-factory
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.3
model-index:
- name: Mistral-7B-Instruct-v0.3-ORPO-SALT-HALF
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-Instruct-v0.3-ORPO-SALT-HALF

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8506
- Rewards/chosen: -0.0787
- Rewards/rejected: -0.0996
- Rewards/accuracies: 0.5724
- Rewards/margins: 0.0209
- Logps/rejected: -0.9956
- Logps/chosen: -0.7867
- Logits/rejected: -3.1507
- Logits/chosen: -3.1305
- Sft Loss: 0.7867
- Odds Ratio Loss: 0.6382

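The evaluation loss above decomposes the way ORPO's objective suggests: the total is the SFT loss plus a weighted odds-ratio term. The weight used below is inferred from the reported numbers, not recorded anywhere in this card:

$$
\mathcal{L}_{\text{ORPO}} = \mathcal{L}_{\text{SFT}} + \lambda \, \mathcal{L}_{\text{OR}},
\qquad 0.7867 + 0.1 \times 0.6382 \approx 0.8506
$$
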
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 3.0

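The tags indicate this adapter was trained with LLaMA-Factory; that run is not reproduced here. As a rough, hypothetical illustration of the same hyperparameters in code, a comparable LoRA + ORPO setup with TRL's `ORPOTrainer` might look like the sketch below (the preference-data file, the LoRA settings, and the odds-ratio weight `beta` are placeholders or inferences, not values recorded in this card):

```python
# Hypothetical sketch only: mirrors the hyperparameters above using TRL,
# which is NOT the toolkit used for the original run (LLaMA-Factory).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Preference data with "prompt" / "chosen" / "rejected" columns (placeholder file).
train_dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

args = ORPOConfig(
    output_dir="Mistral-7B-Instruct-v0.3-ORPO-SALT-HALF",
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # 2 x 8 = effective batch size 16
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,               # the card's "lr_scheduler_warmup_steps: 0.1" reads like a ratio
    beta=0.1,                       # odds-ratio weight; inferred, not recorded in the card
    seed=42,
    bf16=True,
)

# Placeholder LoRA settings; the actual adapter's rank/targets live in its adapter_config.json.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```
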
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 0.8758 | 0.8467 | 500 | 0.8691 | -0.0805 | -0.1009 | 0.5705 | 0.0203 | -1.0086 | -0.8054 | -3.1276 | -3.1089 | 0.8054 | 0.6371 |
| 0.8098 | 1.6935 | 1000 | 0.8549 | -0.0791 | -0.0999 | 0.5676 | 0.0207 | -0.9985 | -0.7911 | -3.1170 | -3.0966 | 0.7911 | 0.6375 |
| 0.8135 | 2.5402 | 1500 | 0.8506 | -0.0787 | -0.0996 | 0.5724 | 0.0209 | -0.9956 | -0.7867 | -3.1507 | -3.1305 | 0.7867 | 0.6382 |


### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
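The card does not include a usage snippet. Below is a minimal inference sketch that loads the LoRA adapter on top of the base model; the adapter repo id is assumed from the model name and the committer's namespace, so adjust it if the adapter lives elsewhere.

```python
# Minimal inference sketch (not part of the original card).
# "chchen/Mistral-7B-Instruct-v0.3-ORPO-SALT-HALF" is an assumed adapter repo id.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "chchen/Mistral-7B-Instruct-v0.3-ORPO-SALT-HALF"  # assumption

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Summarize what ORPO fine-tuning does in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```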
trainer_log.jsonl CHANGED
@@ -151,3 +151,31 @@
  {"current_steps": 1490, "total_steps": 1770, "loss": 0.8252, "accuracy": 0.574999988079071, "learning_rate": 3.024615823368371e-07, "epoch": 2.523285351397121, "percentage": 84.18, "elapsed_time": "4:05:47", "remaining_time": "0:46:11"}
  {"current_steps": 1500, "total_steps": 1770, "loss": 0.8135, "accuracy": 0.581250011920929, "learning_rate": 2.8165102503600716e-07, "epoch": 2.5402201524132093, "percentage": 84.75, "elapsed_time": "4:07:23", "remaining_time": "0:44:31"}
  {"current_steps": 1500, "total_steps": 1770, "eval_loss": 0.8505691885948181, "epoch": 2.5402201524132093, "percentage": 84.75, "elapsed_time": "4:10:38", "remaining_time": "0:45:06"}
+ {"current_steps": 1510, "total_steps": 1770, "loss": 0.9194, "accuracy": 0.4625000059604645, "learning_rate": 2.615393769259039e-07, "epoch": 2.557154953429297, "percentage": 85.31, "elapsed_time": "4:12:19", "remaining_time": "0:43:26"}
+ {"current_steps": 1520, "total_steps": 1770, "loss": 0.7981, "accuracy": 0.5874999761581421, "learning_rate": 2.421329743475917e-07, "epoch": 2.574089754445385, "percentage": 85.88, "elapsed_time": "4:13:52", "remaining_time": "0:41:45"}
+ {"current_steps": 1530, "total_steps": 1770, "loss": 0.8756, "accuracy": 0.550000011920929, "learning_rate": 2.234379314486973e-07, "epoch": 2.5910245554614733, "percentage": 86.44, "elapsed_time": "4:15:31", "remaining_time": "0:40:04"}
+ {"current_steps": 1540, "total_steps": 1770, "loss": 0.8149, "accuracy": 0.6499999761581421, "learning_rate": 2.0546013825709783e-07, "epoch": 2.6079593564775614, "percentage": 87.01, "elapsed_time": "4:17:09", "remaining_time": "0:38:24"}
+ {"current_steps": 1550, "total_steps": 1770, "loss": 0.7295, "accuracy": 0.637499988079071, "learning_rate": 1.88205258825217e-07, "epoch": 2.6248941574936495, "percentage": 87.57, "elapsed_time": "4:18:49", "remaining_time": "0:36:44"}
+ {"current_steps": 1560, "total_steps": 1770, "loss": 0.7572, "accuracy": 0.612500011920929, "learning_rate": 1.7167872944552245e-07, "epoch": 2.6418289585097376, "percentage": 88.14, "elapsed_time": "4:20:24", "remaining_time": "0:35:03"}
+ {"current_steps": 1570, "total_steps": 1770, "loss": 0.8135, "accuracy": 0.5062500238418579, "learning_rate": 1.5588575693777142e-07, "epoch": 2.6587637595258258, "percentage": 88.7, "elapsed_time": "4:22:02", "remaining_time": "0:33:22"}
+ {"current_steps": 1580, "total_steps": 1770, "loss": 0.8605, "accuracy": 0.5249999761581421, "learning_rate": 1.4083131700856428e-07, "epoch": 2.675698560541914, "percentage": 89.27, "elapsed_time": "4:23:40", "remaining_time": "0:31:42"}
+ {"current_steps": 1590, "total_steps": 1770, "loss": 0.8044, "accuracy": 0.612500011920929, "learning_rate": 1.2652015268370315e-07, "epoch": 2.6926333615580016, "percentage": 89.83, "elapsed_time": "4:25:16", "remaining_time": "0:30:01"}
+ {"current_steps": 1600, "total_steps": 1770, "loss": 0.8914, "accuracy": 0.550000011920929, "learning_rate": 1.1295677281386502e-07, "epoch": 2.7095681625740897, "percentage": 90.4, "elapsed_time": "4:26:55", "remaining_time": "0:28:21"}
+ {"current_steps": 1610, "total_steps": 1770, "loss": 0.8777, "accuracy": 0.550000011920929, "learning_rate": 1.0014545065404973e-07, "epoch": 2.726502963590178, "percentage": 90.96, "elapsed_time": "4:28:32", "remaining_time": "0:26:41"}
+ {"current_steps": 1620, "total_steps": 1770, "loss": 0.8348, "accuracy": 0.550000011920929, "learning_rate": 8.809022251725502e-08, "epoch": 2.743437764606266, "percentage": 91.53, "elapsed_time": "4:30:10", "remaining_time": "0:25:00"}
+ {"current_steps": 1630, "total_steps": 1770, "loss": 0.8429, "accuracy": 0.574999988079071, "learning_rate": 7.679488650280509e-08, "epoch": 2.7603725656223537, "percentage": 92.09, "elapsed_time": "4:31:48", "remaining_time": "0:23:20"}
+ {"current_steps": 1640, "total_steps": 1770, "loss": 0.792, "accuracy": 0.5562499761581421, "learning_rate": 6.626300129972563e-08, "epoch": 2.777307366638442, "percentage": 92.66, "elapsed_time": "4:33:19", "remaining_time": "0:21:39"}
+ {"current_steps": 1650, "total_steps": 1770, "loss": 0.797, "accuracy": 0.643750011920929, "learning_rate": 5.649788506555065e-08, "epoch": 2.79424216765453, "percentage": 93.22, "elapsed_time": "4:34:54", "remaining_time": "0:19:59"}
+ {"current_steps": 1660, "total_steps": 1770, "loss": 0.8333, "accuracy": 0.5687500238418579, "learning_rate": 4.7502614380908474e-08, "epoch": 2.811176968670618, "percentage": 93.79, "elapsed_time": "4:36:31", "remaining_time": "0:18:19"}
+ {"current_steps": 1670, "total_steps": 1770, "loss": 0.8222, "accuracy": 0.581250011920929, "learning_rate": 3.9280023280222066e-08, "epoch": 2.828111769686706, "percentage": 94.35, "elapsed_time": "4:38:07", "remaining_time": "0:16:39"}
+ {"current_steps": 1680, "total_steps": 1770, "loss": 0.8739, "accuracy": 0.5874999761581421, "learning_rate": 3.1832702358818855e-08, "epoch": 2.8450465707027943, "percentage": 94.92, "elapsed_time": "4:39:50", "remaining_time": "0:14:59"}
+ {"current_steps": 1690, "total_steps": 1770, "loss": 0.8076, "accuracy": 0.574999988079071, "learning_rate": 2.5162997956746647e-08, "epoch": 2.8619813717188824, "percentage": 95.48, "elapsed_time": "4:41:32", "remaining_time": "0:13:19"}
+ {"current_steps": 1700, "total_steps": 1770, "loss": 0.8378, "accuracy": 0.581250011920929, "learning_rate": 1.9273011419536914e-08, "epoch": 2.8789161727349706, "percentage": 96.05, "elapsed_time": "4:43:01", "remaining_time": "0:11:39"}
+ {"current_steps": 1710, "total_steps": 1770, "loss": 0.82, "accuracy": 0.581250011920929, "learning_rate": 1.4164598436159083e-08, "epoch": 2.8958509737510583, "percentage": 96.61, "elapsed_time": "4:44:34", "remaining_time": "0:09:59"}
+ {"current_steps": 1720, "total_steps": 1770, "loss": 0.8046, "accuracy": 0.59375, "learning_rate": 9.839368454371556e-09, "epoch": 2.9127857747671464, "percentage": 97.18, "elapsed_time": "4:46:12", "remaining_time": "0:08:19"}
+ {"current_steps": 1730, "total_steps": 1770, "loss": 0.7977, "accuracy": 0.6187499761581421, "learning_rate": 6.298684173650649e-09, "epoch": 2.9297205757832345, "percentage": 97.74, "elapsed_time": "4:47:46", "remaining_time": "0:06:39"}
+ {"current_steps": 1740, "total_steps": 1770, "loss": 0.8552, "accuracy": 0.6000000238418579, "learning_rate": 3.543661115860686e-09, "epoch": 2.9466553767993227, "percentage": 98.31, "elapsed_time": "4:49:16", "remaining_time": "0:04:59"}
+ {"current_steps": 1750, "total_steps": 1770, "loss": 0.7914, "accuracy": 0.574999988079071, "learning_rate": 1.575167273800693e-09, "epoch": 2.963590177815411, "percentage": 98.87, "elapsed_time": "4:50:45", "remaining_time": "0:03:19"}
+ {"current_steps": 1760, "total_steps": 1770, "loss": 0.8971, "accuracy": 0.574999988079071, "learning_rate": 3.9382283773564676e-10, "epoch": 2.9805249788314985, "percentage": 99.44, "elapsed_time": "4:52:24", "remaining_time": "0:01:39"}
+ {"current_steps": 1770, "total_steps": 1770, "loss": 0.9206, "accuracy": 0.6187499761581421, "learning_rate": 0.0, "epoch": 2.9974597798475866, "percentage": 100.0, "elapsed_time": "4:53:57", "remaining_time": "0:00:00"}
+ {"current_steps": 1770, "total_steps": 1770, "epoch": 2.9974597798475866, "percentage": 100.0, "elapsed_time": "4:53:58", "remaining_time": "0:00:00"}
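
The log above is newline-delimited JSON, one record per logging step, with evaluation records carrying `eval_loss` instead of `loss`. A small sketch for pulling the training-loss curve out of the file (the path is assumed to be the file as committed):

```python
# Sketch: extract (step, loss) pairs from trainer_log.jsonl.
# Field names are taken from the log entries above; eval-only records are skipped.
import json

steps, losses = [], []
with open("trainer_log.jsonl") as f:
    for line in f:
        record = json.loads(line)
        if "loss" in record:
            steps.append(record["current_steps"])
            losses.append(record["loss"])

for step, loss in zip(steps, losses):
    print(f"step {step:5d}  loss {loss:.4f}")
```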