NicholasCorrado committed 08be805 · verified · 1 Parent(s): 908fcf3

Model save
README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: tinyllama-1.1b-chat-v1.0-ui-math-coding-dpo-2
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # tinyllama-1.1b-chat-v1.0-ui-math-coding-dpo-2
+
+ This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3224
+ - Rewards/chosen: -3.3080
+ - Rewards/rejected: -7.0491
+ - Rewards/accuracies: 0.9062
+ - Rewards/margins: 3.7411
+ - Logps/rejected: -1048.2871
+ - Logps/chosen: -689.9403
+ - Logits/rejected: -1.4398
+ - Logits/chosen: -1.4575
+
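The reward numbers above are internally consistent: in DPO logging, `Rewards/margins` is simply the gap between the chosen and rejected rewards, and `Rewards/accuracies` is the fraction of pairs where that gap is positive. A minimal sketch of the relationship, using the evaluation values reported above:

```python
# Reported evaluation rewards from the model card above.
rewards_chosen = -3.3080
rewards_rejected = -7.0491

# The logged margin is chosen reward minus rejected reward.
margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 3.7411, matching the reported Rewards/margins
```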
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 256
+ - total_eval_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 2
+
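The hyperparameters above fit together arithmetically: the effective batch size of 256 is the per-device batch size times the number of GPUs times the gradient accumulation steps, and combined with the 209,650 training samples reported in `all_results.json` it implies the 1,638 total optimizer steps recorded in `trainer_state.json`. A quick sketch of that check (the ceiling division is an assumption about how partial final batches are counted):

```python
import math

# Effective batch size: per-device batch x number of GPUs x gradient accumulation.
train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # 256, as listed above

# Optimizer steps implied by the dataset size reported in all_results.json.
train_samples = 209650
num_epochs = 2
steps_per_epoch = math.ceil(train_samples / total_train_batch_size)
total_steps = steps_per_epoch * num_epochs
print(steps_per_epoch, total_steps)  # 819 1638, matching global_step
```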
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.2293 | 1.2210 | 1000 | 0.3224 | -3.3080 | -7.0491 | 0.9062 | 3.7411 | -1048.2871 | -689.9403 | -1.4398 | -1.4575 |
+
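The fractional Epoch value in the table also follows from the batch arithmetic: at an effective batch size of 256 over 209,650 samples there are 819 optimizer steps per epoch, so evaluation step 1000 lands at roughly epoch 1.221. A small sketch of that relation (assuming the 819 steps/epoch derived from the hyperparameters):

```python
steps_per_epoch = 819  # ceil(209650 / 256), per the hyperparameters above
epoch_at_step_1000 = 1000 / steps_per_epoch
print(round(epoch_at_step_1000, 3))  # 1.221, matching the table's Epoch column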
+
+ ### Framework versions
+
+ - Transformers 4.44.1
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.0,
+ "total_flos": 0.0,
+ "train_loss": 0.2817063624168927,
+ "train_runtime": 11455.6006,
+ "train_samples": 209650,
+ "train_samples_per_second": 36.602,
+ "train_steps_per_second": 0.143
+ }
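The throughput figures in this file are consistent with the run configuration: 209,650 samples over 2 epochs in 11,455.6 seconds gives about 36.6 samples/s, and the 1,638 optimizer steps recorded in `trainer_state.json` give about 0.143 steps/s. A quick sketch of the check:

```python
train_samples = 209650
epochs = 2.0
train_runtime = 11455.6006  # seconds

samples_per_second = train_samples * epochs / train_runtime
print(round(samples_per_second, 3))  # 36.602, as reported

total_steps = 1638  # global_step from trainer_state.json
steps_per_second = total_steps / train_runtime
print(round(steps_per_second, 3))  # 0.143, as reported
```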
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "max_length": 2048,
+ "pad_token_id": 0,
+ "transformers_version": "4.44.1"
+ }
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.0,
+ "total_flos": 0.0,
+ "train_loss": 0.2817063624168927,
+ "train_runtime": 11455.6006,
+ "train_samples": 209650,
+ "train_samples_per_second": 36.602,
+ "train_steps_per_second": 0.143
+ }
trainer_state.json ADDED
@@ -0,0 +1,2518 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 2.0,
+ "eval_steps": 1000,
+ "global_step": 1638,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.001221001221001221,
+ "grad_norm": 3.104700246084015,
+ "learning_rate": 3.048780487804878e-09,
+ "logits/chosen": -2.611332893371582,
+ "logits/rejected": -2.6034297943115234,
+ "logps/chosen": -424.76251220703125,
+ "logps/rejected": -401.40936279296875,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.0,
+ "rewards/chosen": 0.0,
+ "rewards/margins": 0.0,
+ "rewards/rejected": 0.0,
+ "step": 1
+ },
+ {
+ "epoch": 0.01221001221001221,
+ "grad_norm": 3.069950647256397,
+ "learning_rate": 3.048780487804878e-08,
+ "logits/chosen": -2.483457326889038,
+ "logits/rejected": -2.5166726112365723,
+ "logps/chosen": -394.9632873535156,
+ "logps/rejected": -370.91741943359375,
+ "loss": 0.6932,
+ "rewards/accuracies": 0.4444444477558136,
+ "rewards/chosen": -0.00035367117379792035,
+ "rewards/margins": -0.00030399090610444546,
+ "rewards/rejected": -4.968021312379278e-05,
+ "step": 10
+ },
+ {
+ "epoch": 0.02442002442002442,
+ "grad_norm": 3.064473375199238,
+ "learning_rate": 6.097560975609756e-08,
+ "logits/chosen": -2.4844443798065186,
+ "logits/rejected": -2.5088613033294678,
+ "logps/chosen": -393.47589111328125,
+ "logps/rejected": -384.7496337890625,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.5062500238418579,
+ "rewards/chosen": -0.0005507160676643252,
+ "rewards/margins": -0.0003282243851572275,
+ "rewards/rejected": -0.00022249165340326726,
+ "step": 20
+ },
+ {
+ "epoch": 0.03663003663003663,
+ "grad_norm": 2.881896681708338,
+ "learning_rate": 9.146341463414634e-08,
+ "logits/chosen": -2.478980779647827,
+ "logits/rejected": -2.4746451377868652,
+ "logps/chosen": -400.7593994140625,
+ "logps/rejected": -374.46099853515625,
+ "loss": 0.693,
+ "rewards/accuracies": 0.53125,
+ "rewards/chosen": 0.0005641734460368752,
+ "rewards/margins": 0.00045302818762138486,
+ "rewards/rejected": 0.00011114527296740562,
+ "step": 30
+ },
+ {
+ "epoch": 0.04884004884004884,
+ "grad_norm": 3.0295854115026777,
+ "learning_rate": 1.219512195121951e-07,
+ "logits/chosen": -2.462890386581421,
+ "logits/rejected": -2.483200788497925,
+ "logps/chosen": -397.7459411621094,
+ "logps/rejected": -385.4634704589844,
+ "loss": 0.6922,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": 0.0019774003885686398,
+ "rewards/margins": 0.0016252705827355385,
+ "rewards/rejected": 0.00035212995135225356,
+ "step": 40
+ },
+ {
+ "epoch": 0.06105006105006105,
+ "grad_norm": 2.9757197429390847,
+ "learning_rate": 1.524390243902439e-07,
+ "logits/chosen": -2.4540820121765137,
+ "logits/rejected": -2.4638664722442627,
+ "logps/chosen": -391.7041931152344,
+ "logps/rejected": -370.6739196777344,
+ "loss": 0.6906,
+ "rewards/accuracies": 0.6656249761581421,
+ "rewards/chosen": 0.005458775907754898,
+ "rewards/margins": 0.004881708417087793,
+ "rewards/rejected": 0.0005770674324594438,
+ "step": 50
+ },
+ {
+ "epoch": 0.07326007326007326,
+ "grad_norm": 3.0702933396650827,
+ "learning_rate": 1.8292682926829268e-07,
+ "logits/chosen": -2.453782796859741,
+ "logits/rejected": -2.4780030250549316,
+ "logps/chosen": -375.0745544433594,
+ "logps/rejected": -376.1570739746094,
+ "loss": 0.6872,
+ "rewards/accuracies": 0.7281249761581421,
+ "rewards/chosen": 0.011652452871203423,
+ "rewards/margins": 0.010647130198776722,
+ "rewards/rejected": 0.0010053229052573442,
+ "step": 60
+ },
+ {
+ "epoch": 0.08547008547008547,
+ "grad_norm": 2.9490982076861703,
+ "learning_rate": 2.1341463414634144e-07,
+ "logits/chosen": -2.5242810249328613,
+ "logits/rejected": -2.513012170791626,
+ "logps/chosen": -402.56036376953125,
+ "logps/rejected": -372.23004150390625,
+ "loss": 0.6833,
+ "rewards/accuracies": 0.7906249761581421,
+ "rewards/chosen": 0.01956186816096306,
+ "rewards/margins": 0.018751125782728195,
+ "rewards/rejected": 0.0008107417961582541,
+ "step": 70
+ },
+ {
+ "epoch": 0.09768009768009768,
+ "grad_norm": 2.995960567198045,
+ "learning_rate": 2.439024390243902e-07,
+ "logits/chosen": -2.4664769172668457,
+ "logits/rejected": -2.512596607208252,
+ "logps/chosen": -388.77276611328125,
+ "logps/rejected": -386.55120849609375,
+ "loss": 0.6754,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": 0.0328640341758728,
+ "rewards/margins": 0.035563308745622635,
+ "rewards/rejected": -0.0026992757339030504,
+ "step": 80
+ },
+ {
+ "epoch": 0.10989010989010989,
+ "grad_norm": 3.0842908329196925,
+ "learning_rate": 2.7439024390243906e-07,
+ "logits/chosen": -2.583833932876587,
+ "logits/rejected": -2.6459927558898926,
+ "logps/chosen": -411.39117431640625,
+ "logps/rejected": -416.40771484375,
+ "loss": 0.6616,
+ "rewards/accuracies": 0.809374988079071,
+ "rewards/chosen": 0.04710187762975693,
+ "rewards/margins": 0.06961944699287415,
+ "rewards/rejected": -0.022517573088407516,
+ "step": 90
+ },
+ {
+ "epoch": 0.1221001221001221,
+ "grad_norm": 2.588661111129042,
+ "learning_rate": 3.048780487804878e-07,
+ "logits/chosen": -2.5411438941955566,
+ "logits/rejected": -2.5637898445129395,
+ "logps/chosen": -382.1836853027344,
+ "logps/rejected": -380.8400573730469,
+ "loss": 0.6506,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 0.053000353276729584,
+ "rewards/margins": 0.09493989497423172,
+ "rewards/rejected": -0.04193953052163124,
+ "step": 100
+ },
+ {
+ "epoch": 0.1343101343101343,
+ "grad_norm": 2.921339715123654,
+ "learning_rate": 3.353658536585366e-07,
+ "logits/chosen": -2.5204200744628906,
+ "logits/rejected": -2.5622482299804688,
+ "logps/chosen": -393.4702453613281,
+ "logps/rejected": -395.1978454589844,
+ "loss": 0.6293,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": 0.03344795107841492,
+ "rewards/margins": 0.16028472781181335,
+ "rewards/rejected": -0.12683679163455963,
+ "step": 110
+ },
+ {
+ "epoch": 0.14652014652014653,
+ "grad_norm": 2.963460425484982,
+ "learning_rate": 3.6585365853658536e-07,
+ "logits/chosen": -2.508514881134033,
+ "logits/rejected": -2.5441009998321533,
+ "logps/chosen": -383.43133544921875,
+ "logps/rejected": -407.09210205078125,
+ "loss": 0.5887,
+ "rewards/accuracies": 0.809374988079071,
+ "rewards/chosen": -0.013098609633743763,
+ "rewards/margins": 0.2991253733634949,
+ "rewards/rejected": -0.31222397089004517,
+ "step": 120
+ },
+ {
+ "epoch": 0.15873015873015872,
+ "grad_norm": 3.0254473498968553,
+ "learning_rate": 3.9634146341463414e-07,
+ "logits/chosen": -2.542419910430908,
+ "logits/rejected": -2.5631086826324463,
+ "logps/chosen": -407.2526550292969,
+ "logps/rejected": -442.8262634277344,
+ "loss": 0.5587,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -0.11188334226608276,
+ "rewards/margins": 0.4068102240562439,
+ "rewards/rejected": -0.5186935663223267,
+ "step": 130
+ },
+ {
+ "epoch": 0.17094017094017094,
+ "grad_norm": 3.612015672751943,
+ "learning_rate": 4.268292682926829e-07,
+ "logits/chosen": -2.5460963249206543,
+ "logits/rejected": -2.595196485519409,
+ "logps/chosen": -435.4662170410156,
+ "logps/rejected": -484.08270263671875,
+ "loss": 0.5266,
+ "rewards/accuracies": 0.796875,
+ "rewards/chosen": -0.2931057810783386,
+ "rewards/margins": 0.5774662494659424,
+ "rewards/rejected": -0.8705719709396362,
+ "step": 140
+ },
+ {
+ "epoch": 0.18315018315018314,
+ "grad_norm": 4.357042344454044,
+ "learning_rate": 4.573170731707317e-07,
+ "logits/chosen": -2.4378104209899902,
+ "logits/rejected": -2.4749557971954346,
+ "logps/chosen": -473.6615295410156,
+ "logps/rejected": -519.7752685546875,
+ "loss": 0.5035,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": -0.6644935607910156,
+ "rewards/margins": 0.7163723707199097,
+ "rewards/rejected": -1.3808658123016357,
+ "step": 150
+ },
+ {
+ "epoch": 0.19536019536019536,
+ "grad_norm": 4.657915772314801,
+ "learning_rate": 4.878048780487804e-07,
+ "logits/chosen": -2.3850626945495605,
+ "logits/rejected": -2.404574394226074,
+ "logps/chosen": -475.3724060058594,
+ "logps/rejected": -554.40576171875,
+ "loss": 0.4585,
+ "rewards/accuracies": 0.793749988079071,
+ "rewards/chosen": -0.9302708506584167,
+ "rewards/margins": 0.9146777987480164,
+ "rewards/rejected": -1.8449485301971436,
+ "step": 160
+ },
+ {
+ "epoch": 0.20757020757020758,
+ "grad_norm": 5.145238611541166,
+ "learning_rate": 4.999795585653115e-07,
+ "logits/chosen": -2.4656128883361816,
+ "logits/rejected": -2.5041518211364746,
+ "logps/chosen": -500.9999084472656,
+ "logps/rejected": -621.8275146484375,
+ "loss": 0.4203,
+ "rewards/accuracies": 0.846875011920929,
+ "rewards/chosen": -1.0742014646530151,
+ "rewards/margins": 1.363214135169983,
+ "rewards/rejected": -2.437415599822998,
+ "step": 170
+ },
+ {
+ "epoch": 0.21978021978021978,
+ "grad_norm": 5.498621709371294,
+ "learning_rate": 4.998546507921325e-07,
+ "logits/chosen": -2.3531060218811035,
+ "logits/rejected": -2.3719255924224854,
+ "logps/chosen": -516.4880981445312,
+ "logps/rejected": -656.3202514648438,
+ "loss": 0.3875,
+ "rewards/accuracies": 0.8125,
+ "rewards/chosen": -1.3029719591140747,
+ "rewards/margins": 1.5469024181365967,
+ "rewards/rejected": -2.8498740196228027,
+ "step": 180
+ },
+ {
+ "epoch": 0.231990231990232,
+ "grad_norm": 4.777943322815817,
+ "learning_rate": 4.996162482680895e-07,
+ "logits/chosen": -2.383643388748169,
+ "logits/rejected": -2.390869617462158,
+ "logps/chosen": -557.1913452148438,
+ "logps/rejected": -686.9656372070312,
+ "loss": 0.3856,
+ "rewards/accuracies": 0.828125,
+ "rewards/chosen": -1.459194540977478,
+ "rewards/margins": 1.6121686697006226,
+ "rewards/rejected": -3.0713634490966797,
+ "step": 190
+ },
+ {
+ "epoch": 0.2442002442002442,
+ "grad_norm": 5.801515238018521,
+ "learning_rate": 4.992644592858842e-07,
+ "logits/chosen": -2.3923563957214355,
+ "logits/rejected": -2.444206476211548,
+ "logps/chosen": -532.6453247070312,
+ "logps/rejected": -670.8153076171875,
+ "loss": 0.3645,
+ "rewards/accuracies": 0.8187500238418579,
+ "rewards/chosen": -1.294365406036377,
+ "rewards/margins": 1.502074122428894,
+ "rewards/rejected": -2.7964394092559814,
+ "step": 200
+ },
+ {
+ "epoch": 0.2564102564102564,
+ "grad_norm": 7.316959768144989,
+ "learning_rate": 4.987994436432335e-07,
+ "logits/chosen": -2.342782497406006,
+ "logits/rejected": -2.362506151199341,
+ "logps/chosen": -557.2277221679688,
+ "logps/rejected": -723.668701171875,
+ "loss": 0.3664,
+ "rewards/accuracies": 0.846875011920929,
+ "rewards/chosen": -1.531425952911377,
+ "rewards/margins": 1.8236634731292725,
+ "rewards/rejected": -3.3550896644592285,
+ "step": 210
+ },
+ {
+ "epoch": 0.2686202686202686,
+ "grad_norm": 5.84414999103754,
+ "learning_rate": 4.982214125702845e-07,
+ "logits/chosen": -2.363788366317749,
+ "logits/rejected": -2.4076294898986816,
+ "logps/chosen": -515.8175659179688,
+ "logps/rejected": -681.577880859375,
+ "loss": 0.3513,
+ "rewards/accuracies": 0.856249988079071,
+ "rewards/chosen": -1.2184154987335205,
+ "rewards/margins": 1.8515632152557373,
+ "rewards/rejected": -3.069979190826416,
+ "step": 220
+ },
+ {
+ "epoch": 0.28083028083028083,
+ "grad_norm": 5.57281060925085,
+ "learning_rate": 4.975306286336627e-07,
+ "logits/chosen": -2.390061855316162,
+ "logits/rejected": -2.4118847846984863,
+ "logps/chosen": -558.8645629882812,
+ "logps/rejected": -748.7266845703125,
+ "loss": 0.3603,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -1.5887969732284546,
+ "rewards/margins": 1.980838418006897,
+ "rewards/rejected": -3.5696358680725098,
+ "step": 230
+ },
+ {
+ "epoch": 0.29304029304029305,
+ "grad_norm": 5.714762684377855,
+ "learning_rate": 4.967274056172044e-07,
+ "logits/chosen": -2.3940510749816895,
+ "logits/rejected": -2.4176127910614014,
+ "logps/chosen": -543.5166015625,
+ "logps/rejected": -715.0543212890625,
+ "loss": 0.3498,
+ "rewards/accuracies": 0.84375,
+ "rewards/chosen": -1.3804460763931274,
+ "rewards/margins": 1.9572741985321045,
+ "rewards/rejected": -3.3377201557159424,
+ "step": 240
+ },
+ {
+ "epoch": 0.3052503052503053,
+ "grad_norm": 5.674577347175447,
+ "learning_rate": 4.958121083794216e-07,
+ "logits/chosen": -2.360697031021118,
+ "logits/rejected": -2.3762993812561035,
+ "logps/chosen": -573.5714111328125,
+ "logps/rejected": -768.0198364257812,
+ "loss": 0.3375,
+ "rewards/accuracies": 0.8343750238418579,
+ "rewards/chosen": -1.5868339538574219,
+ "rewards/margins": 2.1389665603637695,
+ "rewards/rejected": -3.7258007526397705,
+ "step": 250
+ },
+ {
+ "epoch": 0.31746031746031744,
+ "grad_norm": 6.3452788515852365,
+ "learning_rate": 4.947851526877681e-07,
+ "logits/chosen": -2.3265128135681152,
+ "logits/rejected": -2.354276180267334,
+ "logps/chosen": -530.7965087890625,
+ "logps/rejected": -724.9639282226562,
+ "loss": 0.3366,
+ "rewards/accuracies": 0.8125,
+ "rewards/chosen": -1.4949758052825928,
+ "rewards/margins": 2.002520799636841,
+ "rewards/rejected": -3.4974968433380127,
+ "step": 260
+ },
+ {
+ "epoch": 0.32967032967032966,
+ "grad_norm": 6.281289415201168,
+ "learning_rate": 4.936470050297798e-07,
+ "logits/chosen": -2.301875352859497,
+ "logits/rejected": -2.3351759910583496,
+ "logps/chosen": -551.602294921875,
+ "logps/rejected": -743.7047119140625,
+ "loss": 0.3241,
+ "rewards/accuracies": 0.8343750238418579,
+ "rewards/chosen": -1.5574215650558472,
+ "rewards/margins": 2.1133522987365723,
+ "rewards/rejected": -3.67077374458313,
+ "step": 270
+ },
+ {
+ "epoch": 0.3418803418803419,
+ "grad_norm": 6.620858897078139,
+ "learning_rate": 4.92398182401176e-07,
+ "logits/chosen": -2.3097803592681885,
+ "logits/rejected": -2.3185763359069824,
+ "logps/chosen": -573.0301513671875,
+ "logps/rejected": -778.9459228515625,
+ "loss": 0.3358,
+ "rewards/accuracies": 0.871874988079071,
+ "rewards/chosen": -1.798607587814331,
+ "rewards/margins": 2.2159924507141113,
+ "rewards/rejected": -4.0146002769470215,
+ "step": 280
+ },
+ {
+ "epoch": 0.3540903540903541,
+ "grad_norm": 6.930259744533563,
+ "learning_rate": 4.910392520710174e-07,
+ "logits/chosen": -2.301408529281616,
+ "logits/rejected": -2.3531241416931152,
+ "logps/chosen": -536.4937133789062,
+ "logps/rejected": -733.7653198242188,
+ "loss": 0.3233,
+ "rewards/accuracies": 0.846875011920929,
+ "rewards/chosen": -1.5255537033081055,
+ "rewards/margins": 2.0018668174743652,
+ "rewards/rejected": -3.52742075920105,
+ "step": 290
+ },
+ {
+ "epoch": 0.3663003663003663,
+ "grad_norm": 6.863827814496594,
+ "learning_rate": 4.895708313240285e-07,
+ "logits/chosen": -2.3684794902801514,
+ "logits/rejected": -2.3983054161071777,
+ "logps/chosen": -560.7042236328125,
+ "logps/rejected": -786.914794921875,
+ "loss": 0.3143,
+ "rewards/accuracies": 0.878125011920929,
+ "rewards/chosen": -1.5986013412475586,
+ "rewards/margins": 2.3771414756774902,
+ "rewards/rejected": -3.9757423400878906,
+ "step": 300
+ },
+ {
+ "epoch": 0.3785103785103785,
+ "grad_norm": 6.618762045262814,
+ "learning_rate": 4.879935871802001e-07,
+ "logits/chosen": -2.340313196182251,
+ "logits/rejected": -2.348755121231079,
+ "logps/chosen": -555.6011962890625,
+ "logps/rejected": -804.3768310546875,
+ "loss": 0.3199,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -1.6778042316436768,
+ "rewards/margins": 2.592184543609619,
+ "rewards/rejected": -4.269989013671875,
+ "step": 310
+ },
+ {
+ "epoch": 0.3907203907203907,
+ "grad_norm": 6.104172520347164,
+ "learning_rate": 4.863082360917998e-07,
+ "logits/chosen": -2.300529956817627,
+ "logits/rejected": -2.3536949157714844,
+ "logps/chosen": -565.5293579101562,
+ "logps/rejected": -806.9231567382812,
+ "loss": 0.3146,
+ "rewards/accuracies": 0.856249988079071,
+ "rewards/chosen": -1.7859094142913818,
+ "rewards/margins": 2.411153793334961,
+ "rewards/rejected": -4.19706392288208,
+ "step": 320
+ },
+ {
+ "epoch": 0.40293040293040294,
+ "grad_norm": 13.362839259142513,
+ "learning_rate": 4.845155436179286e-07,
+ "logits/chosen": -2.2448084354400635,
+ "logits/rejected": -2.265852451324463,
+ "logps/chosen": -558.632080078125,
+ "logps/rejected": -792.7662963867188,
+ "loss": 0.2963,
+ "rewards/accuracies": 0.8687499761581421,
+ "rewards/chosen": -1.8193552494049072,
+ "rewards/margins": 2.476144790649414,
+ "rewards/rejected": -4.2955002784729,
+ "step": 330
+ },
+ {
+ "epoch": 0.41514041514041516,
+ "grad_norm": 8.193239382667205,
+ "learning_rate": 4.826163240767716e-07,
+ "logits/chosen": -2.2685139179229736,
+ "logits/rejected": -2.291156053543091,
+ "logps/chosen": -578.7394409179688,
+ "logps/rejected": -828.9251098632812,
+ "loss": 0.2845,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -1.7241359949111938,
+ "rewards/margins": 2.655815601348877,
+ "rewards/rejected": -4.379951000213623,
+ "step": 340
+ },
+ {
+ "epoch": 0.42735042735042733,
+ "grad_norm": 11.339490692180462,
+ "learning_rate": 4.806114401756988e-07,
+ "logits/chosen": -2.278423309326172,
+ "logits/rejected": -2.281816244125366,
+ "logps/chosen": -609.6286010742188,
+ "logps/rejected": -881.9080810546875,
+ "loss": 0.3005,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": -2.0114948749542236,
+ "rewards/margins": 2.8581748008728027,
+ "rewards/rejected": -4.8696699142456055,
+ "step": 350
+ },
+ {
+ "epoch": 0.43956043956043955,
+ "grad_norm": 7.458240778141653,
+ "learning_rate": 4.785018026193862e-07,
+ "logits/chosen": -2.3067989349365234,
+ "logits/rejected": -2.302696943283081,
+ "logps/chosen": -579.6629638671875,
+ "logps/rejected": -773.7774658203125,
+ "loss": 0.3119,
+ "rewards/accuracies": 0.8687499761581421,
+ "rewards/chosen": -1.6714990139007568,
+ "rewards/margins": 2.2192506790161133,
+ "rewards/rejected": -3.890749454498291,
+ "step": 360
+ },
+ {
+ "epoch": 0.4517704517704518,
+ "grad_norm": 7.753160256439322,
+ "learning_rate": 4.762883696961353e-07,
+ "logits/chosen": -2.2718212604522705,
+ "logits/rejected": -2.275289535522461,
+ "logps/chosen": -606.1307373046875,
+ "logps/rejected": -799.2538452148438,
+ "loss": 0.2968,
+ "rewards/accuracies": 0.8687499761581421,
+ "rewards/chosen": -2.069481134414673,
+ "rewards/margins": 2.2473983764648438,
+ "rewards/rejected": -4.316879749298096,
+ "step": 370
+ },
+ {
+ "epoch": 0.463980463980464,
+ "grad_norm": 7.668480163136406,
+ "learning_rate": 4.739721468425763e-07,
+ "logits/chosen": -2.196530342102051,
+ "logits/rejected": -2.2193641662597656,
+ "logps/chosen": -598.3923950195312,
+ "logps/rejected": -832.3982543945312,
+ "loss": 0.2843,
+ "rewards/accuracies": 0.840624988079071,
+ "rewards/chosen": -2.077108383178711,
+ "rewards/margins": 2.470067024230957,
+ "rewards/rejected": -4.547175407409668,
+ "step": 380
+ },
+ {
+ "epoch": 0.47619047619047616,
+ "grad_norm": 7.295306779149404,
+ "learning_rate": 4.715541861869562e-07,
+ "logits/chosen": -2.260920763015747,
+ "logits/rejected": -2.2953712940216064,
+ "logps/chosen": -582.3689575195312,
+ "logps/rejected": -889.9528198242188,
+ "loss": 0.2779,
+ "rewards/accuracies": 0.8968750238418579,
+ "rewards/chosen": -1.7534271478652954,
+ "rewards/margins": 3.1180617809295654,
+ "rewards/rejected": -4.871488571166992,
+ "step": 390
+ },
+ {
+ "epoch": 0.4884004884004884,
+ "grad_norm": 8.066116525077385,
+ "learning_rate": 4.690355860712163e-07,
+ "logits/chosen": -2.222865581512451,
+ "logits/rejected": -2.278761386871338,
+ "logps/chosen": -597.5587158203125,
+ "logps/rejected": -863.9420776367188,
+ "loss": 0.2852,
+ "rewards/accuracies": 0.893750011920929,
+ "rewards/chosen": -1.9404022693634033,
+ "rewards/margins": 2.7501840591430664,
+ "rewards/rejected": -4.690586090087891,
+ "step": 400
+ },
+ {
+ "epoch": 0.5006105006105006,
+ "grad_norm": 7.116028285618711,
+ "learning_rate": 4.664174905520782e-07,
+ "logits/chosen": -2.1432394981384277,
+ "logits/rejected": -2.1721110343933105,
+ "logps/chosen": -567.879638671875,
+ "logps/rejected": -829.4773559570312,
+ "loss": 0.2967,
+ "rewards/accuracies": 0.846875011920929,
+ "rewards/chosen": -1.8987783193588257,
+ "rewards/margins": 2.688133716583252,
+ "rewards/rejected": -4.586911678314209,
+ "step": 410
+ },
+ {
+ "epoch": 0.5128205128205128,
+ "grad_norm": 7.04360607650549,
+ "learning_rate": 4.637010888813638e-07,
+ "logits/chosen": -2.182570695877075,
+ "logits/rejected": -2.1820969581604004,
+ "logps/chosen": -577.9962768554688,
+ "logps/rejected": -821.7763671875,
+ "loss": 0.2965,
+ "rewards/accuracies": 0.84375,
+ "rewards/chosen": -1.8270905017852783,
+ "rewards/margins": 2.6087398529052734,
+ "rewards/rejected": -4.435830116271973,
+ "step": 420
+ },
+ {
+ "epoch": 0.525030525030525,
+ "grad_norm": 8.266012865923875,
+ "learning_rate": 4.608876149657862e-07,
+ "logits/chosen": -2.1726815700531006,
+ "logits/rejected": -2.179741621017456,
+ "logps/chosen": -615.1981201171875,
+ "logps/rejected": -858.2872314453125,
+ "loss": 0.2806,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": -2.058166980743408,
+ "rewards/margins": 2.7252743244171143,
+ "rewards/rejected": -4.783441066741943,
+ "step": 430
+ },
+ {
+ "epoch": 0.5372405372405372,
+ "grad_norm": 7.654156099725738,
+ "learning_rate": 4.5797834680645553e-07,
+ "logits/chosen": -2.149817943572998,
+ "logits/rejected": -2.1750922203063965,
+ "logps/chosen": -626.6456909179688,
+ "logps/rejected": -924.1580810546875,
+ "loss": 0.2855,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": -2.245823621749878,
+ "rewards/margins": 3.026350736618042,
+ "rewards/rejected": -5.27217435836792,
+ "step": 440
+ },
+ {
+ "epoch": 0.5494505494505495,
+ "grad_norm": 7.2185121675912445,
+ "learning_rate": 4.549746059183561e-07,
+ "logits/chosen": -2.134995460510254,
+ "logits/rejected": -2.1586225032806396,
+ "logps/chosen": -588.483642578125,
+ "logps/rejected": -902.3905029296875,
+ "loss": 0.2651,
+ "rewards/accuracies": 0.893750011920929,
+ "rewards/chosen": -2.05491304397583,
+ "rewards/margins": 3.2267088890075684,
+ "rewards/rejected": -5.281621932983398,
+ "step": 450
+ },
+ {
+ "epoch": 0.5616605616605617,
+ "grad_norm": 8.705469465511992,
+ "learning_rate": 4.5187775673005744e-07,
+ "logits/chosen": -2.0956122875213623,
+ "logits/rejected": -2.122889757156372,
+ "logps/chosen": -614.2383422851562,
+ "logps/rejected": -854.7008666992188,
+ "loss": 0.2706,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -2.2875373363494873,
+ "rewards/margins": 2.5731403827667236,
+ "rewards/rejected": -4.860678195953369,
+ "step": 460
+ },
+ {
+ "epoch": 0.5738705738705738,
+ "grad_norm": 6.674714090077302,
+ "learning_rate": 4.4868920596393197e-07,
+ "logits/chosen": -2.122455596923828,
721
+ "logits/rejected": -2.1570394039154053,
722
+ "logps/chosen": -628.7379150390625,
723
+ "logps/rejected": -913.1719970703125,
724
+ "loss": 0.2635,
725
+ "rewards/accuracies": 0.8531249761581421,
726
+ "rewards/chosen": -2.345153331756592,
727
+ "rewards/margins": 2.962648630142212,
728
+ "rewards/rejected": -5.307801723480225,
729
+ "step": 470
730
+ },
731
+ {
732
+ "epoch": 0.5860805860805861,
733
+ "grad_norm": 10.484528534722092,
734
+ "learning_rate": 4.4541040199716063e-07,
735
+ "logits/chosen": -2.1315033435821533,
736
+ "logits/rejected": -2.133057117462158,
737
+ "logps/chosen": -652.1898803710938,
738
+ "logps/rejected": -918.3726806640625,
739
+ "loss": 0.2841,
740
+ "rewards/accuracies": 0.887499988079071,
741
+ "rewards/chosen": -2.442749261856079,
742
+ "rewards/margins": 2.9514050483703613,
743
+ "rewards/rejected": -5.3941545486450195,
744
+ "step": 480
745
+ },
746
+ {
747
+ "epoch": 0.5982905982905983,
748
+ "grad_norm": 6.764572032034108,
749
+ "learning_rate": 4.4204283420381827e-07,
750
+ "logits/chosen": -2.0328996181488037,
751
+ "logits/rejected": -2.0834460258483887,
752
+ "logps/chosen": -605.2128295898438,
753
+ "logps/rejected": -897.3580322265625,
754
+ "loss": 0.2652,
755
+ "rewards/accuracies": 0.871874988079071,
756
+ "rewards/chosen": -2.2144479751586914,
757
+ "rewards/margins": 2.9639675617218018,
758
+ "rewards/rejected": -5.178415775299072,
759
+ "step": 490
760
+ },
761
+ {
762
+ "epoch": 0.6105006105006106,
763
+ "grad_norm": 7.778145469907922,
764
+ "learning_rate": 4.3858803227833526e-07,
765
+ "logits/chosen": -1.9765386581420898,
766
+ "logits/rejected": -2.015956401824951,
767
+ "logps/chosen": -608.0709838867188,
768
+ "logps/rejected": -899.2232666015625,
769
+ "loss": 0.2715,
770
+ "rewards/accuracies": 0.90625,
771
+ "rewards/chosen": -2.045722484588623,
772
+ "rewards/margins": 3.0392613410949707,
773
+ "rewards/rejected": -5.084983825683594,
774
+ "step": 500
775
+ },
776
+ {
777
+ "epoch": 0.6227106227106227,
778
+ "grad_norm": 7.943002170259063,
779
+ "learning_rate": 4.350475655406445e-07,
780
+ "logits/chosen": -2.0267040729522705,
781
+ "logits/rejected": -2.0642428398132324,
782
+ "logps/chosen": -634.7288208007812,
783
+ "logps/rejected": -928.9196166992188,
784
+ "loss": 0.2677,
785
+ "rewards/accuracies": 0.875,
786
+ "rewards/chosen": -2.3875765800476074,
787
+ "rewards/margins": 3.001603126525879,
788
+ "rewards/rejected": -5.3891801834106445,
789
+ "step": 510
790
+ },
791
+ {
792
+ "epoch": 0.6349206349206349,
793
+ "grad_norm": 8.177071581241291,
794
+ "learning_rate": 4.314230422233286e-07,
795
+ "logits/chosen": -1.9996439218521118,
796
+ "logits/rejected": -2.001002788543701,
797
+ "logps/chosen": -606.2533569335938,
798
+ "logps/rejected": -888.8444213867188,
799
+ "loss": 0.2508,
800
+ "rewards/accuracies": 0.875,
801
+ "rewards/chosen": -2.237772226333618,
802
+ "rewards/margins": 2.8773951530456543,
803
+ "rewards/rejected": -5.115168571472168,
804
+ "step": 520
805
+ },
806
+ {
+ "epoch": 0.6471306471306472,
+ "grad_norm": 7.600982395285539,
+ "learning_rate": 4.2771610874109166e-07,
+ "logits/chosen": -1.9806162118911743,
+ "logits/rejected": -2.036025285720825,
+ "logps/chosen": -653.9401245117188,
+ "logps/rejected": -974.8654174804688,
+ "loss": 0.2529,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.513798236846924,
+ "rewards/margins": 3.2604928016662598,
+ "rewards/rejected": -5.774291038513184,
+ "step": 530
+ },
+ {
+ "epoch": 0.6593406593406593,
+ "grad_norm": 9.082170828519892,
+ "learning_rate": 4.2392844894288605e-07,
+ "logits/chosen": -1.9804267883300781,
+ "logits/rejected": -1.9906890392303467,
+ "logps/chosen": -633.1802368164062,
+ "logps/rejected": -931.8502197265625,
+ "loss": 0.2571,
+ "rewards/accuracies": 0.921875,
+ "rewards/chosen": -2.3836662769317627,
+ "rewards/margins": 3.0242981910705566,
+ "rewards/rejected": -5.40796422958374,
+ "step": 540
+ },
+ {
+ "epoch": 0.6715506715506715,
+ "grad_norm": 7.837053802282412,
+ "learning_rate": 4.2006178334703636e-07,
+ "logits/chosen": -2.006538152694702,
+ "logits/rejected": -2.0499379634857178,
+ "logps/chosen": -650.6314697265625,
+ "logps/rejected": -932.8741455078125,
+ "loss": 0.2636,
+ "rewards/accuracies": 0.8656250238418579,
+ "rewards/chosen": -2.4409451484680176,
+ "rewards/margins": 3.0085086822509766,
+ "rewards/rejected": -5.449453830718994,
+ "step": 550
+ },
+ {
+ "epoch": 0.6837606837606838,
+ "grad_norm": 6.941390377752381,
+ "learning_rate": 4.161178683597054e-07,
+ "logits/chosen": -2.0168557167053223,
+ "logits/rejected": -1.9983936548233032,
+ "logps/chosen": -596.7725830078125,
+ "logps/rejected": -862.6604614257812,
+ "loss": 0.2596,
+ "rewards/accuracies": 0.8812500238418579,
+ "rewards/chosen": -2.015202283859253,
+ "rewards/margins": 2.8551807403564453,
+ "rewards/rejected": -4.870382785797119,
+ "step": 560
+ },
+ {
+ "epoch": 0.6959706959706959,
+ "grad_norm": 9.116023944385924,
+ "learning_rate": 4.1209849547705916e-07,
+ "logits/chosen": -1.9668136835098267,
+ "logits/rejected": -1.9564120769500732,
+ "logps/chosen": -597.05419921875,
+ "logps/rejected": -889.5735473632812,
+ "loss": 0.2673,
+ "rewards/accuracies": 0.9312499761581421,
+ "rewards/chosen": -1.908453345298767,
+ "rewards/margins": 3.2164673805236816,
+ "rewards/rejected": -5.124920845031738,
+ "step": 570
+ },
+ {
+ "epoch": 0.7081807081807082,
+ "grad_norm": 9.73618947224433,
+ "learning_rate": 4.080054904714917e-07,
+ "logits/chosen": -2.0258185863494873,
+ "logits/rejected": -2.048612594604492,
+ "logps/chosen": -642.0574951171875,
+ "logps/rejected": -941.4654541015625,
+ "loss": 0.2523,
+ "rewards/accuracies": 0.893750011920929,
+ "rewards/chosen": -2.526458263397217,
+ "rewards/margins": 3.056541919708252,
+ "rewards/rejected": -5.583000183105469,
+ "step": 580
+ },
+ {
+ "epoch": 0.7203907203907204,
+ "grad_norm": 8.420821275679248,
+ "learning_rate": 4.038407125622806e-07,
+ "logits/chosen": -1.9966434240341187,
+ "logits/rejected": -1.9993374347686768,
+ "logps/chosen": -637.5452880859375,
+ "logps/rejected": -916.9855346679688,
+ "loss": 0.2664,
+ "rewards/accuracies": 0.8812500238418579,
+ "rewards/chosen": -2.3783979415893555,
+ "rewards/margins": 3.0379176139831543,
+ "rewards/rejected": -5.41631555557251,
+ "step": 590
+ },
+ {
+ "epoch": 0.7326007326007326,
+ "grad_norm": 11.758932811938493,
+ "learning_rate": 3.9960605357105e-07,
+ "logits/chosen": -2.070080041885376,
+ "logits/rejected": -2.069638729095459,
+ "logps/chosen": -621.4031372070312,
+ "logps/rejected": -907.4441528320312,
+ "loss": 0.2693,
+ "rewards/accuracies": 0.8843749761581421,
+ "rewards/chosen": -2.1775975227355957,
+ "rewards/margins": 3.1478095054626465,
+ "rewards/rejected": -5.3254075050354,
+ "step": 600
+ },
+ {
+ "epoch": 0.7448107448107448,
+ "grad_norm": 7.854542997142867,
+ "learning_rate": 3.95303437062423e-07,
+ "logits/chosen": -1.9713811874389648,
+ "logits/rejected": -1.977330207824707,
+ "logps/chosen": -642.8685913085938,
+ "logps/rejected": -944.4246826171875,
+ "loss": 0.2427,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -2.425691604614258,
+ "rewards/margins": 3.1849162578582764,
+ "rewards/rejected": -5.610608100891113,
+ "step": 610
+ },
+ {
+ "epoch": 0.757020757020757,
+ "grad_norm": 7.703815141377651,
+ "learning_rate": 3.9093481747025615e-07,
+ "logits/chosen": -1.9850308895111084,
+ "logits/rejected": -1.9987919330596924,
+ "logps/chosen": -668.91943359375,
+ "logps/rejected": -992.9368896484375,
+ "loss": 0.2573,
+ "rewards/accuracies": 0.8968750238418579,
+ "rewards/chosen": -2.7951459884643555,
+ "rewards/margins": 3.3580493927001953,
+ "rewards/rejected": -6.153195381164551,
+ "step": 620
+ },
+ {
+ "epoch": 0.7692307692307693,
+ "grad_norm": 8.680106719787165,
+ "learning_rate": 3.86502179209851e-07,
+ "logits/chosen": -1.9663703441619873,
+ "logits/rejected": -1.9948896169662476,
+ "logps/chosen": -610.5440673828125,
+ "logps/rejected": -898.1494140625,
+ "loss": 0.2614,
+ "rewards/accuracies": 0.8812500238418579,
+ "rewards/chosen": -2.289752960205078,
+ "rewards/margins": 2.931398868560791,
+ "rewards/rejected": -5.221152305603027,
+ "step": 630
+ },
+ {
+ "epoch": 0.7814407814407814,
+ "grad_norm": 10.429529355264094,
+ "learning_rate": 3.8200753577654765e-07,
+ "logits/chosen": -1.944758653640747,
+ "logits/rejected": -1.981778860092163,
+ "logps/chosen": -658.198486328125,
+ "logps/rejected": -976.310546875,
+ "loss": 0.252,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -2.5240378379821777,
+ "rewards/margins": 3.3471717834472656,
+ "rewards/rejected": -5.871209621429443,
+ "step": 640
+ },
+ {
+ "epoch": 0.7936507936507936,
+ "grad_norm": 9.854306625880072,
+ "learning_rate": 3.7745292883110784e-07,
+ "logits/chosen": -1.9690608978271484,
+ "logits/rejected": -1.9564428329467773,
+ "logps/chosen": -661.0819091796875,
+ "logps/rejected": -980.1275634765625,
+ "loss": 0.2457,
+ "rewards/accuracies": 0.921875,
+ "rewards/chosen": -2.6281650066375732,
+ "rewards/margins": 3.3420677185058594,
+ "rewards/rejected": -5.970232963562012,
+ "step": 650
+ },
+ {
+ "epoch": 0.8058608058608059,
+ "grad_norm": 10.023665069599845,
+ "learning_rate": 3.7284042727230506e-07,
+ "logits/chosen": -1.9493147134780884,
+ "logits/rejected": -1.9569787979125977,
+ "logps/chosen": -632.8013916015625,
+ "logps/rejected": -883.5313720703125,
+ "loss": 0.2535,
+ "rewards/accuracies": 0.8968750238418579,
+ "rewards/chosen": -2.2697696685791016,
+ "rewards/margins": 2.83363676071167,
+ "rewards/rejected": -5.1034064292907715,
+ "step": 660
+ },
+ {
+ "epoch": 0.818070818070818,
+ "grad_norm": 8.667472255038128,
+ "learning_rate": 3.681721262971413e-07,
+ "logits/chosen": -2.0272722244262695,
+ "logits/rejected": -2.039168119430542,
+ "logps/chosen": -667.6822509765625,
+ "logps/rejected": -953.4684448242188,
+ "loss": 0.2344,
+ "rewards/accuracies": 0.871874988079071,
+ "rewards/chosen": -2.6374218463897705,
+ "rewards/margins": 3.003962516784668,
+ "rewards/rejected": -5.641384601593018,
+ "step": 670
+ },
+ {
+ "epoch": 0.8302808302808303,
+ "grad_norm": 11.27138748410362,
+ "learning_rate": 3.634501464491183e-07,
+ "logits/chosen": -2.0476953983306885,
+ "logits/rejected": -2.064707040786743,
+ "logps/chosen": -683.5264282226562,
+ "logps/rejected": -1021.5324096679688,
+ "loss": 0.233,
+ "rewards/accuracies": 0.9156249761581421,
+ "rewards/chosen": -2.805328845977783,
+ "rewards/margins": 3.454031467437744,
+ "rewards/rejected": -6.259359836578369,
+ "step": 680
+ },
+ {
+ "epoch": 0.8424908424908425,
+ "grad_norm": 10.056725367187733,
+ "learning_rate": 3.5867663265499553e-07,
+ "logits/chosen": -1.9634895324707031,
+ "logits/rejected": -1.978262186050415,
+ "logps/chosen": -688.0383911132812,
+ "logps/rejected": -1032.1566162109375,
+ "loss": 0.2429,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.8448052406311035,
+ "rewards/margins": 3.5894615650177,
+ "rewards/rejected": -6.434267520904541,
+ "step": 690
+ },
+ {
+ "epoch": 0.8547008547008547,
+ "grad_norm": 10.779176958531542,
+ "learning_rate": 3.5385375325047163e-07,
+ "logits/chosen": -1.9880058765411377,
+ "logits/rejected": -1.948150396347046,
+ "logps/chosen": -678.7589111328125,
+ "logps/rejected": -950.1603393554688,
+ "loss": 0.2498,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -2.7791597843170166,
+ "rewards/margins": 2.923888921737671,
+ "rewards/rejected": -5.7030487060546875,
+ "step": 700
+ },
+ {
+ "epoch": 0.8669108669108669,
+ "grad_norm": 7.2521850880280665,
+ "learning_rate": 3.4898369899523323e-07,
+ "logits/chosen": -1.9596385955810547,
+ "logits/rejected": -1.9468225240707397,
+ "logps/chosen": -670.3238525390625,
+ "logps/rejected": -978.1044921875,
+ "loss": 0.245,
+ "rewards/accuracies": 0.934374988079071,
+ "rewards/chosen": -2.5701966285705566,
+ "rewards/margins": 3.3597824573516846,
+ "rewards/rejected": -5.92997932434082,
+ "step": 710
+ },
+ {
+ "epoch": 0.8791208791208791,
+ "grad_norm": 8.740369578606884,
+ "learning_rate": 3.4406868207781725e-07,
+ "logits/chosen": -1.9431158304214478,
+ "logits/rejected": -1.9668858051300049,
+ "logps/chosen": -652.6699829101562,
+ "logps/rejected": -978.8201293945312,
+ "loss": 0.252,
+ "rewards/accuracies": 0.8968750238418579,
+ "rewards/chosen": -2.550971746444702,
+ "rewards/margins": 3.3800606727600098,
+ "rewards/rejected": -5.931032180786133,
+ "step": 720
+ },
+ {
+ "epoch": 0.8913308913308914,
+ "grad_norm": 8.917488139539987,
+ "learning_rate": 3.3911093511073984e-07,
+ "logits/chosen": -1.9911832809448242,
+ "logits/rejected": -2.0098257064819336,
+ "logps/chosen": -659.505859375,
+ "logps/rejected": -971.2593994140625,
+ "loss": 0.2467,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -2.533627986907959,
+ "rewards/margins": 3.233253002166748,
+ "rewards/rejected": -5.766880989074707,
+ "step": 730
+ },
+ {
+ "epoch": 0.9035409035409036,
+ "grad_norm": 10.10992640831635,
+ "learning_rate": 3.3411271011634697e-07,
+ "logits/chosen": -1.8714303970336914,
+ "logits/rejected": -1.8533025979995728,
+ "logps/chosen": -610.3406372070312,
+ "logps/rejected": -920.5524291992188,
+ "loss": 0.2487,
+ "rewards/accuracies": 0.9281250238418579,
+ "rewards/chosen": -2.253237247467041,
+ "rewards/margins": 3.4128048419952393,
+ "rewards/rejected": -5.666041374206543,
+ "step": 740
+ },
+ {
+ "epoch": 0.9157509157509157,
+ "grad_norm": 10.16349038435686,
+ "learning_rate": 3.290762775038494e-07,
+ "logits/chosen": -1.8609752655029297,
+ "logits/rejected": -1.8657268285751343,
+ "logps/chosen": -650.5752563476562,
+ "logps/rejected": -990.2158203125,
+ "loss": 0.2209,
+ "rewards/accuracies": 0.918749988079071,
+ "rewards/chosen": -2.499966859817505,
+ "rewards/margins": 3.5357258319854736,
+ "rewards/rejected": -6.0356926918029785,
+ "step": 750
+ },
+ {
+ "epoch": 0.927960927960928,
+ "grad_norm": 7.957411909596094,
+ "learning_rate": 3.2400392503800477e-07,
+ "logits/chosen": -1.904550313949585,
+ "logits/rejected": -1.8870503902435303,
+ "logps/chosen": -671.4937744140625,
+ "logps/rejected": -1035.06005859375,
+ "loss": 0.2278,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -2.7466225624084473,
+ "rewards/margins": 3.689298629760742,
+ "rewards/rejected": -6.435920715332031,
+ "step": 760
+ },
+ {
+ "epoch": 0.9401709401709402,
+ "grad_norm": 10.052468847497332,
+ "learning_rate": 3.188979567999161e-07,
+ "logits/chosen": -1.846875786781311,
+ "logits/rejected": -1.8232959508895874,
+ "logps/chosen": -650.9820556640625,
+ "logps/rejected": -956.2470703125,
+ "loss": 0.2315,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.5622196197509766,
+ "rewards/margins": 3.281222105026245,
+ "rewards/rejected": -5.843441963195801,
+ "step": 770
+ },
+ {
+ "epoch": 0.9523809523809523,
+ "grad_norm": 8.845964368891687,
+ "learning_rate": 3.137606921404191e-07,
+ "logits/chosen": -1.8606138229370117,
+ "logits/rejected": -1.8387963771820068,
+ "logps/chosen": -645.6654663085938,
+ "logps/rejected": -939.6808471679688,
+ "loss": 0.2448,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -2.5100626945495605,
+ "rewards/margins": 3.1421546936035156,
+ "rewards/rejected": -5.652216911315918,
+ "step": 780
+ },
+ {
+ "epoch": 0.9645909645909646,
+ "grad_norm": 8.337030706793822,
+ "learning_rate": 3.0859446462653273e-07,
+ "logits/chosen": -1.8315823078155518,
+ "logits/rejected": -1.8353532552719116,
+ "logps/chosen": -647.4951782226562,
+ "logps/rejected": -976.1868286132812,
+ "loss": 0.2333,
+ "rewards/accuracies": 0.921875,
+ "rewards/chosen": -2.6538143157958984,
+ "rewards/margins": 3.3457908630371094,
+ "rewards/rejected": -5.999605178833008,
+ "step": 790
+ },
+ {
+ "epoch": 0.9768009768009768,
+ "grad_norm": 12.589579709309238,
+ "learning_rate": 3.034016209814529e-07,
+ "logits/chosen": -1.7978140115737915,
+ "logits/rejected": -1.8211300373077393,
+ "logps/chosen": -673.1063232421875,
+ "logps/rejected": -1035.5140380859375,
+ "loss": 0.2326,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.7855618000030518,
+ "rewards/margins": 3.614232301712036,
+ "rewards/rejected": -6.399794101715088,
+ "step": 800
+ },
+ {
+ "epoch": 0.989010989010989,
+ "grad_norm": 8.693339105078397,
+ "learning_rate": 2.9818452001856926e-07,
+ "logits/chosen": -1.937139868736267,
+ "logits/rejected": -1.9104955196380615,
+ "logps/chosen": -709.281982421875,
+ "logps/rejected": -1029.5799560546875,
+ "loss": 0.2363,
+ "rewards/accuracies": 0.8968750238418579,
+ "rewards/chosen": -2.88238263130188,
+ "rewards/margins": 3.4413890838623047,
+ "rewards/rejected": -6.3237714767456055,
+ "step": 810
+ },
+ {
+ "epoch": 1.0012210012210012,
+ "grad_norm": 7.149313221308795,
+ "learning_rate": 2.929455315699908e-07,
+ "logits/chosen": -1.8502495288848877,
+ "logits/rejected": -1.8472564220428467,
+ "logps/chosen": -633.6524658203125,
+ "logps/rejected": -995.9943237304688,
+ "loss": 0.2329,
+ "rewards/accuracies": 0.8968750238418579,
+ "rewards/chosen": -2.5007681846618652,
+ "rewards/margins": 3.637205123901367,
+ "rewards/rejected": -6.137973308563232,
+ "step": 820
+ },
+ {
+ "epoch": 1.0134310134310134,
+ "grad_norm": 10.404080063241945,
+ "learning_rate": 2.8768703541006574e-07,
+ "logits/chosen": -1.9154850244522095,
+ "logits/rejected": -1.9522449970245361,
+ "logps/chosen": -641.5855712890625,
+ "logps/rejected": -996.3082275390625,
+ "loss": 0.2282,
+ "rewards/accuracies": 0.903124988079071,
+ "rewards/chosen": -2.4940600395202637,
+ "rewards/margins": 3.6092581748962402,
+ "rewards/rejected": -6.103318214416504,
+ "step": 830
+ },
+ {
+ "epoch": 1.0256410256410255,
+ "grad_norm": 9.587186254816348,
+ "learning_rate": 2.8241142017438557e-07,
+ "logits/chosen": -1.873044729232788,
+ "logits/rejected": -1.8523098230361938,
+ "logps/chosen": -667.5242309570312,
+ "logps/rejected": -989.6135864257812,
+ "loss": 0.2229,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.665759563446045,
+ "rewards/margins": 3.5281364917755127,
+ "rewards/rejected": -6.193896293640137,
+ "step": 840
+ },
+ {
+ "epoch": 1.037851037851038,
+ "grad_norm": 9.366883940315772,
+ "learning_rate": 2.771210822747639e-07,
+ "logits/chosen": -1.9449422359466553,
+ "logits/rejected": -1.8745979070663452,
+ "logps/chosen": -703.3695068359375,
+ "logps/rejected": -1020.8590087890625,
+ "loss": 0.2436,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -2.8462772369384766,
+ "rewards/margins": 3.382256031036377,
+ "rewards/rejected": -6.2285332679748535,
+ "step": 850
+ },
+ {
+ "epoch": 1.05006105006105,
+ "grad_norm": 7.56593773353816,
+ "learning_rate": 2.718184248106828e-07,
+ "logits/chosen": -1.834773063659668,
+ "logits/rejected": -1.8442004919052124,
+ "logps/chosen": -658.0390014648438,
+ "logps/rejected": -1019.7131958007812,
+ "loss": 0.2224,
+ "rewards/accuracies": 0.9312499761581421,
+ "rewards/chosen": -2.64483380317688,
+ "rewards/margins": 3.6114120483398438,
+ "rewards/rejected": -6.256246089935303,
+ "step": 860
+ },
+ {
+ "epoch": 1.0622710622710623,
+ "grad_norm": 8.841079507264354,
+ "learning_rate": 2.665058564777014e-07,
+ "logits/chosen": -1.8463929891586304,
+ "logits/rejected": -1.839784026145935,
+ "logps/chosen": -652.1268920898438,
+ "logps/rejected": -1015.1273193359375,
+ "loss": 0.2131,
+ "rewards/accuracies": 0.9156249761581421,
+ "rewards/chosen": -2.7053442001342773,
+ "rewards/margins": 3.6853573322296143,
+ "rewards/rejected": -6.3907012939453125,
+ "step": 870
+ },
+ {
+ "epoch": 1.0744810744810744,
+ "grad_norm": 9.314872473095937,
+ "learning_rate": 2.611857904733227e-07,
+ "logits/chosen": -1.7795578241348267,
+ "logits/rejected": -1.7798683643341064,
+ "logps/chosen": -688.0113525390625,
+ "logps/rejected": -1035.625244140625,
+ "loss": 0.2242,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -2.99015474319458,
+ "rewards/margins": 3.6745171546936035,
+ "rewards/rejected": -6.664670467376709,
+ "step": 880
+ },
+ {
+ "epoch": 1.0866910866910866,
+ "grad_norm": 8.334325457508486,
+ "learning_rate": 2.5586064340081516e-07,
+ "logits/chosen": -1.8280951976776123,
+ "logits/rejected": -1.8405084609985352,
+ "logps/chosen": -666.4359741210938,
+ "logps/rejected": -1007.44091796875,
+ "loss": 0.23,
+ "rewards/accuracies": 0.9312499761581421,
+ "rewards/chosen": -2.6223227977752686,
+ "rewards/margins": 3.5808663368225098,
+ "rewards/rejected": -6.203189373016357,
+ "step": 890
+ },
+ {
+ "epoch": 1.098901098901099,
+ "grad_norm": 7.206748373825152,
+ "learning_rate": 2.505328341714873e-07,
+ "logits/chosen": -1.7787061929702759,
+ "logits/rejected": -1.8012263774871826,
+ "logps/chosen": -663.2637329101562,
+ "logps/rejected": -1036.8768310546875,
+ "loss": 0.2129,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": -2.8313677310943604,
+ "rewards/margins": 3.7177510261535645,
+ "rewards/rejected": -6.5491180419921875,
+ "step": 900
+ },
+ {
+ "epoch": 1.1111111111111112,
+ "grad_norm": 10.177149465840422,
+ "learning_rate": 2.4520478290591416e-07,
+ "logits/chosen": -1.7504581212997437,
+ "logits/rejected": -1.7739044427871704,
+ "logps/chosen": -702.8826904296875,
+ "logps/rejected": -1059.865234375,
+ "loss": 0.2268,
+ "rewards/accuracies": 0.9156249761581421,
+ "rewards/chosen": -2.962860584259033,
+ "rewards/margins": 3.6696648597717285,
+ "rewards/rejected": -6.6325249671936035,
+ "step": 910
+ },
+ {
+ "epoch": 1.1233211233211233,
+ "grad_norm": 8.632975577344476,
+ "learning_rate": 2.3987890983461403e-07,
+ "logits/chosen": -1.7676483392715454,
+ "logits/rejected": -1.7390410900115967,
+ "logps/chosen": -696.1847534179688,
+ "logps/rejected": -1042.147216796875,
+ "loss": 0.2198,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.94394850730896,
+ "rewards/margins": 3.708522319793701,
+ "rewards/rejected": -6.652470588684082,
+ "step": 920
+ },
+ {
+ "epoch": 1.1355311355311355,
+ "grad_norm": 9.916019501072466,
+ "learning_rate": 2.3455763419867544e-07,
+ "logits/chosen": -1.7698522806167603,
+ "logits/rejected": -1.8097903728485107,
+ "logps/chosen": -661.6279907226562,
+ "logps/rejected": -993.2236328125,
+ "loss": 0.2322,
+ "rewards/accuracies": 0.8843749761581421,
+ "rewards/chosen": -2.7180135250091553,
+ "rewards/margins": 3.294512987136841,
+ "rewards/rejected": -6.012526512145996,
+ "step": 930
+ },
+ {
+ "epoch": 1.1477411477411477,
+ "grad_norm": 8.842361688694279,
+ "learning_rate": 2.2924337315083353e-07,
+ "logits/chosen": -1.782261610031128,
+ "logits/rejected": -1.8005173206329346,
+ "logps/chosen": -626.9920043945312,
+ "logps/rejected": -989.1697387695312,
+ "loss": 0.2168,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -2.3607215881347656,
+ "rewards/margins": 3.6655585765838623,
+ "rewards/rejected": -6.026279926300049,
+ "step": 940
+ },
+ {
+ "epoch": 1.1599511599511598,
+ "grad_norm": 9.32555579290425,
+ "learning_rate": 2.239385406574955e-07,
+ "logits/chosen": -1.7827867269515991,
+ "logits/rejected": -1.779762625694275,
+ "logps/chosen": -655.2774047851562,
+ "logps/rejected": -1022.1461791992188,
+ "loss": 0.2233,
+ "rewards/accuracies": 0.940625011920929,
+ "rewards/chosen": -2.592012405395508,
+ "rewards/margins": 3.816871166229248,
+ "rewards/rejected": -6.408883571624756,
+ "step": 950
+ },
+ {
+ "epoch": 1.1721611721611722,
+ "grad_norm": 10.939107380292038,
+ "learning_rate": 2.1864554640221244e-07,
+ "logits/chosen": -1.7000805139541626,
+ "logits/rejected": -1.688855767250061,
+ "logps/chosen": -676.7574462890625,
+ "logps/rejected": -1019.0291748046875,
+ "loss": 0.2202,
+ "rewards/accuracies": 0.9156249761581421,
+ "rewards/chosen": -2.7764394283294678,
+ "rewards/margins": 3.648998260498047,
+ "rewards/rejected": -6.425436973571777,
+ "step": 960
+ },
+ {
+ "epoch": 1.1843711843711844,
+ "grad_norm": 10.161202425220864,
+ "learning_rate": 2.133667946910977e-07,
+ "logits/chosen": -1.812464714050293,
+ "logits/rejected": -1.8180309534072876,
+ "logps/chosen": -680.7279052734375,
+ "logps/rejected": -1037.299560546875,
+ "loss": 0.2178,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.6717233657836914,
+ "rewards/margins": 3.6986587047576904,
+ "rewards/rejected": -6.370382308959961,
+ "step": 970
+ },
+ {
+ "epoch": 1.1965811965811965,
+ "grad_norm": 9.089009127660146,
+ "learning_rate": 2.0810468336068697e-07,
+ "logits/chosen": -1.752598524093628,
+ "logits/rejected": -1.7276668548583984,
+ "logps/chosen": -658.7349243164062,
+ "logps/rejected": -977.19580078125,
+ "loss": 0.2208,
+ "rewards/accuracies": 0.90625,
+ "rewards/chosen": -2.660342216491699,
+ "rewards/margins": 3.40667462348938,
+ "rewards/rejected": -6.0670166015625,
+ "step": 980
+ },
+ {
+ "epoch": 1.2087912087912087,
+ "grad_norm": 8.82346746989032,
+ "learning_rate": 2.0286160268873826e-07,
+ "logits/chosen": -1.7947756052017212,
+ "logits/rejected": -1.7663764953613281,
+ "logps/chosen": -677.0531005859375,
+ "logps/rejected": -991.953125,
+ "loss": 0.2212,
+ "rewards/accuracies": 0.903124988079071,
+ "rewards/chosen": -2.6913745403289795,
+ "rewards/margins": 3.3568809032440186,
+ "rewards/rejected": -6.048255920410156,
+ "step": 990
+ },
+ {
+ "epoch": 1.221001221001221,
+ "grad_norm": 8.94368139635424,
+ "learning_rate": 1.9763993430846392e-07,
+ "logits/chosen": -1.691082239151001,
+ "logits/rejected": -1.7195104360580444,
+ "logps/chosen": -689.7127075195312,
+ "logps/rejected": -993.8372802734375,
+ "loss": 0.2293,
+ "rewards/accuracies": 0.893750011920929,
+ "rewards/chosen": -2.915121555328369,
+ "rewards/margins": 3.158756971359253,
+ "rewards/rejected": -6.073878288269043,
+ "step": 1000
+ },
+ {
+ "epoch": 1.221001221001221,
+ "eval_logits/chosen": -1.457505226135254,
+ "eval_logits/rejected": -1.4398448467254639,
+ "eval_logps/chosen": -689.9403076171875,
+ "eval_logps/rejected": -1048.287109375,
+ "eval_loss": 0.32239073514938354,
+ "eval_rewards/accuracies": 0.90625,
+ "eval_rewards/chosen": -3.3080387115478516,
+ "eval_rewards/margins": 3.7410764694213867,
+ "eval_rewards/rejected": -7.049115180969238,
+ "eval_runtime": 3.2056,
+ "eval_samples_per_second": 62.391,
+ "eval_steps_per_second": 1.248,
+ "step": 1000
+ },
+ {
+ "epoch": 1.2332112332112333,
+ "grad_norm": 10.431477967240758,
+ "learning_rate": 1.9244205012669066e-07,
+ "logits/chosen": -1.6947963237762451,
+ "logits/rejected": -1.6955578327178955,
+ "logps/chosen": -682.6785278320312,
+ "logps/rejected": -1043.7890625,
+ "loss": 0.2195,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -2.892595052719116,
+ "rewards/margins": 3.645073652267456,
+ "rewards/rejected": -6.537668704986572,
+ "step": 1010
+ },
+ {
+ "epoch": 1.2454212454212454,
+ "grad_norm": 10.45964252043519,
+ "learning_rate": 1.8727031124643738e-07,
+ "logits/chosen": -1.7148557901382446,
+ "logits/rejected": -1.730376958847046,
+ "logps/chosen": -693.7335205078125,
+ "logps/rejected": -1035.718505859375,
+ "loss": 0.2235,
+ "rewards/accuracies": 0.909375011920929,
+ "rewards/chosen": -2.9769744873046875,
+ "rewards/margins": 3.624099016189575,
+ "rewards/rejected": -6.601072788238525,
+ "step": 1020
+ },
+ {
+ "epoch": 1.2576312576312576,
+ "grad_norm": 10.46971576068257,
+ "learning_rate": 1.8212706689439993e-07,
+ "logits/chosen": -1.6916805505752563,
+ "logits/rejected": -1.693860411643982,
+ "logps/chosen": -655.6392211914062,
+ "logps/rejected": -1019.2525634765625,
+ "loss": 0.2083,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -2.5844101905822754,
+ "rewards/margins": 3.817833662033081,
+ "rewards/rejected": -6.402244567871094,
+ "step": 1030
+ },
+ {
+ "epoch": 1.2698412698412698,
+ "grad_norm": 11.412713931602926,
+ "learning_rate": 1.7701465335383148e-07,
+ "logits/chosen": -1.7352428436279297,
+ "logits/rejected": -1.712531328201294,
+ "logps/chosen": -703.946533203125,
+ "logps/rejected": -1039.840087890625,
+ "loss": 0.2246,
+ "rewards/accuracies": 0.9281250238418579,
+ "rewards/chosen": -2.9772469997406006,
+ "rewards/margins": 3.4660041332244873,
+ "rewards/rejected": -6.443251132965088,
+ "step": 1040
+ },
+ {
+ "epoch": 1.282051282051282,
+ "grad_norm": 9.59659946541285,
+ "learning_rate": 1.7193539290330172e-07,
+ "logits/chosen": -1.7665666341781616,
1607
+ "logits/rejected": -1.7859532833099365,
1608
+ "logps/chosen": -693.987548828125,
1609
+ "logps/rejected": -1081.337158203125,
1610
+ "loss": 0.2048,
1611
+ "rewards/accuracies": 0.925000011920929,
1612
+ "rewards/chosen": -2.7806649208068848,
1613
+ "rewards/margins": 3.96142578125,
1614
+ "rewards/rejected": -6.742089748382568,
1615
+ "step": 1050
1616
+ },
1617
+ {
1618
+ "epoch": 1.2942612942612943,
1619
+ "grad_norm": 7.833467318755542,
1620
+ "learning_rate": 1.668915927618183e-07,
1621
+ "logits/chosen": -1.6072509288787842,
1622
+ "logits/rejected": -1.5868537425994873,
1623
+ "logps/chosen": -644.87353515625,
1624
+ "logps/rejected": -992.3282470703125,
1625
+ "loss": 0.2139,
1626
+ "rewards/accuracies": 0.893750011920929,
1627
+ "rewards/chosen": -2.6021721363067627,
1628
+ "rewards/margins": 3.6462604999542236,
1629
+ "rewards/rejected": -6.248432636260986,
1630
+ "step": 1060
1631
+ },
1632
+ {
1633
+ "epoch": 1.3064713064713065,
1634
+ "grad_norm": 8.944995143399364,
1635
+ "learning_rate": 1.618855440407878e-07,
1636
+ "logits/chosen": -1.6417900323867798,
1637
+ "logits/rejected": -1.645308494567871,
1638
+ "logps/chosen": -680.7823486328125,
1639
+ "logps/rejected": -984.8699951171875,
1640
+ "loss": 0.2206,
1641
+ "rewards/accuracies": 0.878125011920929,
1642
+ "rewards/chosen": -2.845156669616699,
1643
+ "rewards/margins": 3.2548797130584717,
1644
+ "rewards/rejected": -6.100037097930908,
1645
+ "step": 1070
1646
+ },
1647
+ {
1648
+ "epoch": 1.3186813186813187,
1649
+ "grad_norm": 8.538205860728207,
1650
+ "learning_rate": 1.5691952070329493e-07,
1651
+ "logits/chosen": -1.5984406471252441,
1652
+ "logits/rejected": -1.5525627136230469,
1653
+ "logps/chosen": -696.5716552734375,
1654
+ "logps/rejected": -1034.56640625,
1655
+ "loss": 0.2183,
1656
+ "rewards/accuracies": 0.9156249761581421,
1657
+ "rewards/chosen": -2.9366581439971924,
1658
+ "rewards/margins": 3.6637485027313232,
1659
+ "rewards/rejected": -6.600405693054199,
1660
+ "step": 1080
1661
+ },
1662
+ {
1663
+ "epoch": 1.3308913308913308,
1664
+ "grad_norm": 12.006671385110291,
1665
+ "learning_rate": 1.519957785311698e-07,
1666
+ "logits/chosen": -1.6169030666351318,
1667
+ "logits/rejected": -1.630112648010254,
1668
+ "logps/chosen": -687.6176147460938,
1669
+ "logps/rejected": -1042.2376708984375,
1670
+ "loss": 0.2274,
1671
+ "rewards/accuracies": 0.940625011920929,
1672
+ "rewards/chosen": -2.9744813442230225,
1673
+ "rewards/margins": 3.7546725273132324,
1674
+ "rewards/rejected": -6.729154109954834,
1675
+ "step": 1090
1676
+ },
1677
+ {
1678
+ "epoch": 1.3431013431013432,
1679
+ "grad_norm": 9.403413375132386,
1680
+ "learning_rate": 1.4711655410031536e-07,
1681
+ "logits/chosen": -1.752509355545044,
1682
+ "logits/rejected": -1.7398014068603516,
1683
+ "logps/chosen": -701.8548583984375,
1684
+ "logps/rejected": -1074.74609375,
1685
+ "loss": 0.2107,
1686
+ "rewards/accuracies": 0.9156249761581421,
1687
+ "rewards/chosen": -2.93528413772583,
1688
+ "rewards/margins": 3.7751307487487793,
1689
+ "rewards/rejected": -6.710414886474609,
1690
+ "step": 1100
1691
+ },
1692
+ {
1693
+ "epoch": 1.3553113553113554,
1694
+ "grad_norm": 9.067370494527864,
1695
+ "learning_rate": 1.422840637647574e-07,
1696
+ "logits/chosen": -1.6540132761001587,
1697
+ "logits/rejected": -1.6520767211914062,
1698
+ "logps/chosen": -648.3968505859375,
1699
+ "logps/rejected": -1009.6447143554688,
1700
+ "loss": 0.2154,
1701
+ "rewards/accuracies": 0.9281250238418579,
1702
+ "rewards/chosen": -2.6349334716796875,
1703
+ "rewards/margins": 3.7205891609191895,
1704
+ "rewards/rejected": -6.355522632598877,
1705
+ "step": 1110
1706
+ },
1707
+ {
1708
+ "epoch": 1.3675213675213675,
1709
+ "grad_norm": 8.8218853819329,
1710
+ "learning_rate": 1.3750050264988172e-07,
1711
+ "logits/chosen": -1.599161148071289,
1712
+ "logits/rejected": -1.6590015888214111,
1713
+ "logps/chosen": -683.4578857421875,
1714
+ "logps/rejected": -1025.697998046875,
1715
+ "loss": 0.2113,
1716
+ "rewards/accuracies": 0.918749988079071,
1717
+ "rewards/chosen": -2.938286542892456,
1718
+ "rewards/margins": 3.5185725688934326,
1719
+ "rewards/rejected": -6.456859588623047,
1720
+ "step": 1120
1721
+ },
1722
+ {
1723
+ "epoch": 1.3797313797313797,
1724
+ "grad_norm": 7.729524060255651,
1725
+ "learning_rate": 1.3276804365531303e-07,
1726
+ "logits/chosen": -1.6903009414672852,
1727
+ "logits/rejected": -1.6954196691513062,
1728
+ "logps/chosen": -684.0424194335938,
1729
+ "logps/rejected": -1034.60205078125,
1730
+ "loss": 0.1965,
1731
+ "rewards/accuracies": 0.9156249761581421,
1732
+ "rewards/chosen": -2.8384480476379395,
1733
+ "rewards/margins": 3.558556079864502,
1734
+ "rewards/rejected": -6.3970046043396,
1735
+ "step": 1130
1736
+ },
1737
+ {
1738
+ "epoch": 1.3919413919413919,
1739
+ "grad_norm": 10.111068058521244,
1740
+ "learning_rate": 1.2808883646789088e-07,
1741
+ "logits/chosen": -1.6597654819488525,
1742
+ "logits/rejected": -1.6594810485839844,
1743
+ "logps/chosen": -678.0111083984375,
1744
+ "logps/rejected": -1045.977783203125,
1745
+ "loss": 0.2093,
1746
+ "rewards/accuracies": 0.9437500238418579,
1747
+ "rewards/chosen": -2.8265321254730225,
1748
+ "rewards/margins": 3.8290627002716064,
1749
+ "rewards/rejected": -6.655595302581787,
1750
+ "step": 1140
1751
+ },
1752
+ {
1753
+ "epoch": 1.404151404151404,
1754
+ "grad_norm": 9.778269285200919,
1755
+ "learning_rate": 1.2346500658518864e-07,
1756
+ "logits/chosen": -1.6785917282104492,
1757
+ "logits/rejected": -1.6684529781341553,
1758
+ "logps/chosen": -682.1680908203125,
1759
+ "logps/rejected": -1051.3212890625,
1760
+ "loss": 0.2278,
1761
+ "rewards/accuracies": 0.925000011920929,
1762
+ "rewards/chosen": -2.902517080307007,
1763
+ "rewards/margins": 3.8359665870666504,
1764
+ "rewards/rejected": -6.738483428955078,
1765
+ "step": 1150
1766
+ },
1767
+ {
1768
+ "epoch": 1.4163614163614164,
1769
+ "grad_norm": 8.55576204479013,
1770
+ "learning_rate": 1.1889865435002117e-07,
1771
+ "logits/chosen": -1.7918224334716797,
1772
+ "logits/rejected": -1.7642894983291626,
1773
+ "logps/chosen": -692.8043823242188,
1774
+ "logps/rejected": -1036.125244140625,
1775
+ "loss": 0.2091,
1776
+ "rewards/accuracies": 0.8968750238418579,
1777
+ "rewards/chosen": -2.8053765296936035,
1778
+ "rewards/margins": 3.668884754180908,
1779
+ "rewards/rejected": -6.474261283874512,
1780
+ "step": 1160
1781
+ },
1782
+ {
1783
+ "epoch": 1.4285714285714286,
1784
+ "grad_norm": 9.250758880138703,
1785
+ "learning_rate": 1.1439185399637888e-07,
1786
+ "logits/chosen": -1.809579849243164,
1787
+ "logits/rejected": -1.820180892944336,
1788
+ "logps/chosen": -689.2313232421875,
1789
+ "logps/rejected": -1069.072021484375,
1790
+ "loss": 0.2072,
1791
+ "rewards/accuracies": 0.9281250238418579,
1792
+ "rewards/chosen": -2.8422112464904785,
1793
+ "rewards/margins": 3.8554039001464844,
1794
+ "rewards/rejected": -6.697615623474121,
1795
+ "step": 1170
1796
+ },
1797
+ {
1798
+ "epoch": 1.4407814407814408,
1799
+ "grad_norm": 11.66443113180578,
1800
+ "learning_rate": 1.099466527072207e-07,
1801
+ "logits/chosen": -1.7422889471054077,
1802
+ "logits/rejected": -1.6775966882705688,
1803
+ "logps/chosen": -724.9702758789062,
1804
+ "logps/rejected": -1071.587890625,
1805
+ "loss": 0.2124,
1806
+ "rewards/accuracies": 0.8968750238418579,
1807
+ "rewards/chosen": -3.1015143394470215,
1808
+ "rewards/margins": 3.6922173500061035,
1809
+ "rewards/rejected": -6.793731689453125,
1810
+ "step": 1180
1811
+ },
1812
+ {
1813
+ "epoch": 1.452991452991453,
1814
+ "grad_norm": 9.357661665183807,
1815
+ "learning_rate": 1.0556506968455556e-07,
1816
+ "logits/chosen": -1.6604530811309814,
1817
+ "logits/rejected": -1.598813533782959,
1818
+ "logps/chosen": -708.178955078125,
1819
+ "logps/rejected": -1078.4857177734375,
1820
+ "loss": 0.2191,
1821
+ "rewards/accuracies": 0.953125,
1822
+ "rewards/chosen": -2.9419569969177246,
1823
+ "rewards/margins": 4.017121315002441,
1824
+ "rewards/rejected": -6.959078311920166,
1825
+ "step": 1190
1826
+ },
1827
+ {
1828
+ "epoch": 1.4652014652014653,
1829
+ "grad_norm": 8.971265853956833,
1830
+ "learning_rate": 1.0124909523223418e-07,
1831
+ "logits/chosen": -1.8122241497039795,
1832
+ "logits/rejected": -1.7499420642852783,
1833
+ "logps/chosen": -676.4575805664062,
1834
+ "logps/rejected": -1026.417724609375,
1835
+ "loss": 0.226,
1836
+ "rewards/accuracies": 0.903124988079071,
1837
+ "rewards/chosen": -2.6309123039245605,
1838
+ "rewards/margins": 3.7246639728546143,
1839
+ "rewards/rejected": -6.355576038360596,
1840
+ "step": 1200
1841
+ },
1842
+ {
1843
+ "epoch": 1.4774114774114775,
1844
+ "grad_norm": 11.17454543526871,
1845
+ "learning_rate": 9.700068985186677e-08,
1846
+ "logits/chosen": -1.7836052179336548,
1847
+ "logits/rejected": -1.7842352390289307,
1848
+ "logps/chosen": -679.8863525390625,
1849
+ "logps/rejected": -1052.3326416015625,
1850
+ "loss": 0.2076,
1851
+ "rewards/accuracies": 0.890625,
1852
+ "rewards/chosen": -2.7537055015563965,
1853
+ "rewards/margins": 3.8916175365448,
1854
+ "rewards/rejected": -6.645322322845459,
1855
+ "step": 1210
1856
+ },
1857
+ {
1858
+ "epoch": 1.4896214896214897,
1859
+ "grad_norm": 8.470663050298606,
1860
+ "learning_rate": 9.282178335227883e-08,
1861
+ "logits/chosen": -1.6586374044418335,
1862
+ "logits/rejected": -1.6548519134521484,
1863
+ "logps/chosen": -673.2451171875,
1864
+ "logps/rejected": -1058.727294921875,
1865
+ "loss": 0.1957,
1866
+ "rewards/accuracies": 0.925000011920929,
1867
+ "rewards/chosen": -2.7371268272399902,
1868
+ "rewards/margins": 3.8924622535705566,
1869
+ "rewards/rejected": -6.629590034484863,
1870
+ "step": 1220
1871
+ },
1872
+ {
1873
+ "epoch": 1.5018315018315018,
1874
+ "grad_norm": 8.98808268846931,
1875
+ "learning_rate": 8.871427397290893e-08,
1876
+ "logits/chosen": -1.6249803304672241,
1877
+ "logits/rejected": -1.6458851099014282,
1878
+ "logps/chosen": -654.238037109375,
1879
+ "logps/rejected": -1073.6612548828125,
1880
+ "loss": 0.1921,
1881
+ "rewards/accuracies": 0.940625011920929,
1882
+ "rewards/chosen": -2.555995464324951,
1883
+ "rewards/margins": 4.273428440093994,
1884
+ "rewards/rejected": -6.8294243812561035,
1885
+ "step": 1230
1886
+ },
1887
+ {
1888
+ "epoch": 1.514041514041514,
1889
+ "grad_norm": 10.028503381886956,
1890
+ "learning_rate": 8.468002752154671e-08,
1891
+ "logits/chosen": -1.7111746072769165,
1892
+ "logits/rejected": -1.7021448612213135,
1893
+ "logps/chosen": -696.2750854492188,
1894
+ "logps/rejected": -1062.916748046875,
1895
+ "loss": 0.2054,
1896
+ "rewards/accuracies": 0.925000011920929,
1897
+ "rewards/chosen": -2.8526463508605957,
1898
+ "rewards/margins": 3.87031626701355,
1899
+ "rewards/rejected": -6.722962856292725,
1900
+ "step": 1240
1901
+ },
1902
+ {
1903
+ "epoch": 1.5262515262515262,
1904
+ "grad_norm": 9.587289277274007,
1905
+ "learning_rate": 8.07208765268021e-08,
1906
+ "logits/chosen": -1.6548601388931274,
1907
+ "logits/rejected": -1.6278877258300781,
1908
+ "logps/chosen": -686.55126953125,
1909
+ "logps/rejected": -1056.9136962890625,
1910
+ "loss": 0.2031,
1911
+ "rewards/accuracies": 0.909375011920929,
1912
+ "rewards/chosen": -2.857362985610962,
1913
+ "rewards/margins": 3.842681407928467,
1914
+ "rewards/rejected": -6.700045108795166,
1915
+ "step": 1250
1916
+ },
1917
+ {
1918
+ "epoch": 1.5384615384615383,
1919
+ "grad_norm": 9.113469988640157,
1920
+ "learning_rate": 7.683861940569217e-08,
1921
+ "logits/chosen": -1.6518821716308594,
1922
+ "logits/rejected": -1.6446526050567627,
1923
+ "logps/chosen": -680.0530395507812,
1924
+ "logps/rejected": -1071.1126708984375,
1925
+ "loss": 0.1992,
1926
+ "rewards/accuracies": 0.9281250238418579,
1927
+ "rewards/chosen": -2.8840444087982178,
1928
+ "rewards/margins": 4.099475383758545,
1929
+ "rewards/rejected": -6.983519077301025,
1930
+ "step": 1260
1931
+ },
1932
+ {
1933
+ "epoch": 1.5506715506715507,
1934
+ "grad_norm": 9.40788034177994,
1935
+ "learning_rate": 7.303501964672246e-08,
1936
+ "logits/chosen": -1.6062475442886353,
1937
+ "logits/rejected": -1.5758030414581299,
1938
+ "logps/chosen": -676.6715087890625,
1939
+ "logps/rejected": -1019.4368896484375,
1940
+ "loss": 0.2274,
1941
+ "rewards/accuracies": 0.875,
1942
+ "rewards/chosen": -2.716747760772705,
1943
+ "rewards/margins": 3.6902427673339844,
1944
+ "rewards/rejected": -6.406990051269531,
1945
+ "step": 1270
1946
+ },
1947
+ {
1948
+ "epoch": 1.5628815628815629,
1949
+ "grad_norm": 10.234752942213406,
1950
+ "learning_rate": 6.931180500883484e-08,
1951
+ "logits/chosen": -1.6409828662872314,
1952
+ "logits/rejected": -1.6017128229141235,
1953
+ "logps/chosen": -686.1589965820312,
1954
+ "logps/rejected": -1049.4881591796875,
1955
+ "loss": 0.2011,
1956
+ "rewards/accuracies": 0.918749988079071,
1957
+ "rewards/chosen": -2.8092753887176514,
1958
+ "rewards/margins": 3.778611421585083,
1959
+ "rewards/rejected": -6.587886810302734,
1960
+ "step": 1280
1961
+ },
1962
+ {
1963
+ "epoch": 1.575091575091575,
1964
+ "grad_norm": 10.648822630887103,
1965
+ "learning_rate": 6.567066673658442e-08,
1966
+ "logits/chosen": -1.5854926109313965,
1967
+ "logits/rejected": -1.5248379707336426,
1968
+ "logps/chosen": -675.6094360351562,
1969
+ "logps/rejected": -1013.9142456054688,
1970
+ "loss": 0.2134,
1971
+ "rewards/accuracies": 0.871874988079071,
1972
+ "rewards/chosen": -2.8324193954467773,
1973
+ "rewards/margins": 3.5213024616241455,
1974
+ "rewards/rejected": -6.353722095489502,
1975
+ "step": 1290
1976
+ },
1977
+ {
1978
+ "epoch": 1.5873015873015874,
1979
+ "grad_norm": 10.089650308836681,
1980
+ "learning_rate": 6.21132587919036e-08,
1981
+ "logits/chosen": -1.593980312347412,
1982
+ "logits/rejected": -1.577946662902832,
1983
+ "logps/chosen": -666.9020385742188,
1984
+ "logps/rejected": -1011.3902587890625,
1985
+ "loss": 0.2021,
1986
+ "rewards/accuracies": 0.893750011920929,
1987
+ "rewards/chosen": -2.7358155250549316,
1988
+ "rewards/margins": 3.5997562408447266,
1989
+ "rewards/rejected": -6.335571765899658,
1990
+ "step": 1300
1991
+ },
1992
+ {
1993
+ "epoch": 1.5995115995115996,
1994
+ "grad_norm": 10.857356047559415,
1995
+ "learning_rate": 5.864119710280158e-08,
1996
+ "logits/chosen": -1.6040117740631104,
1997
+ "logits/rejected": -1.6094341278076172,
1998
+ "logps/chosen": -694.5184326171875,
1999
+ "logps/rejected": -1058.58740234375,
2000
+ "loss": 0.2002,
2001
+ "rewards/accuracies": 0.9281250238418579,
2002
+ "rewards/chosen": -2.9541563987731934,
2003
+ "rewards/margins": 3.760449171066284,
2004
+ "rewards/rejected": -6.714605808258057,
2005
+ "step": 1310
2006
+ },
2007
+ {
2008
+ "epoch": 1.6117216117216118,
2009
+ "grad_norm": 14.265677703397381,
2010
+ "learning_rate": 5.525605882933965e-08,
2011
+ "logits/chosen": -1.623427391052246,
2012
+ "logits/rejected": -1.6019861698150635,
2013
+ "logps/chosen": -692.2745971679688,
2014
+ "logps/rejected": -1085.745361328125,
2015
+ "loss": 0.1973,
2016
+ "rewards/accuracies": 0.8999999761581421,
2017
+ "rewards/chosen": -2.8990519046783447,
2018
+ "rewards/margins": 4.028355598449707,
2019
+ "rewards/rejected": -6.927407741546631,
2020
+ "step": 1320
2021
+ },
2022
+ {
2023
+ "epoch": 1.623931623931624,
2024
+ "grad_norm": 9.076988920336841,
2025
+ "learning_rate": 5.1959381647217665e-08,
2026
+ "logits/chosen": -1.5620160102844238,
2027
+ "logits/rejected": -1.5626099109649658,
2028
+ "logps/chosen": -688.0601806640625,
2029
+ "logps/rejected": -1034.831298828125,
2030
+ "loss": 0.205,
2031
+ "rewards/accuracies": 0.90625,
2032
+ "rewards/chosen": -2.8957722187042236,
2033
+ "rewards/margins": 3.635453701019287,
2034
+ "rewards/rejected": -6.53122615814209,
2035
+ "step": 1330
2036
+ },
2037
+ {
2038
+ "epoch": 1.636141636141636,
2039
+ "grad_norm": 11.741004645166157,
2040
+ "learning_rate": 4.875266304929496e-08,
2041
+ "logits/chosen": -1.5946216583251953,
2042
+ "logits/rejected": -1.552394151687622,
2043
+ "logps/chosen": -699.3309326171875,
2044
+ "logps/rejected": -1056.736083984375,
2045
+ "loss": 0.203,
2046
+ "rewards/accuracies": 0.9281250238418579,
2047
+ "rewards/chosen": -3.076303005218506,
2048
+ "rewards/margins": 3.665733814239502,
2049
+ "rewards/rejected": -6.74203634262085,
2050
+ "step": 1340
2051
+ },
2052
+ {
2053
+ "epoch": 1.6483516483516483,
2054
+ "grad_norm": 11.38093239647967,
2055
+ "learning_rate": 4.5637359665365025e-08,
2056
+ "logits/chosen": -1.5630689859390259,
2057
+ "logits/rejected": -1.5610148906707764,
2058
+ "logps/chosen": -688.2283325195312,
2059
+ "logps/rejected": -1071.9249267578125,
2060
+ "loss": 0.1984,
2061
+ "rewards/accuracies": 0.921875,
2062
+ "rewards/chosen": -2.906219244003296,
2063
+ "rewards/margins": 3.8943076133728027,
2064
+ "rewards/rejected": -6.8005266189575195,
2065
+ "step": 1350
2066
+ },
2067
+ {
2068
+ "epoch": 1.6605616605616604,
2069
+ "grad_norm": 9.41029423304621,
2070
+ "learning_rate": 4.2614886600491115e-08,
2071
+ "logits/chosen": -1.521751046180725,
2072
+ "logits/rejected": -1.528058409690857,
2073
+ "logps/chosen": -666.5167236328125,
2074
+ "logps/rejected": -1023.7020263671875,
2075
+ "loss": 0.2189,
2076
+ "rewards/accuracies": 0.8999999761581421,
2077
+ "rewards/chosen": -2.8758678436279297,
2078
+ "rewards/margins": 3.6217830181121826,
2079
+ "rewards/rejected": -6.497651100158691,
2080
+ "step": 1360
2081
+ },
2082
+ {
2083
+ "epoch": 1.6727716727716728,
2084
+ "grad_norm": 10.63202381564192,
2085
+ "learning_rate": 3.968661679220467e-08,
2086
+ "logits/chosen": -1.7034133672714233,
2087
+ "logits/rejected": -1.7073980569839478,
2088
+ "logps/chosen": -703.4854736328125,
2089
+ "logps/rejected": -1088.0830078125,
2090
+ "loss": 0.2113,
2091
+ "rewards/accuracies": 0.878125011920929,
2092
+ "rewards/chosen": -3.028465747833252,
2093
+ "rewards/margins": 3.8890113830566406,
2094
+ "rewards/rejected": -6.917477607727051,
2095
+ "step": 1370
2096
+ },
2097
+ {
2098
+ "epoch": 1.684981684981685,
2099
+ "grad_norm": 8.59601070770463,
2100
+ "learning_rate": 3.685388038685811e-08,
2101
+ "logits/chosen": -1.572533369064331,
2102
+ "logits/rejected": -1.5645427703857422,
2103
+ "logps/chosen": -689.0482177734375,
2104
+ "logps/rejected": -1043.1580810546875,
2105
+ "loss": 0.1907,
2106
+ "rewards/accuracies": 0.918749988079071,
2107
+ "rewards/chosen": -2.8478474617004395,
2108
+ "rewards/margins": 3.773597240447998,
2109
+ "rewards/rejected": -6.621443748474121,
2110
+ "step": 1380
2111
+ },
2112
+ {
2113
+ "epoch": 1.6971916971916972,
2114
+ "grad_norm": 8.096290604656218,
2115
+ "learning_rate": 3.41179641354146e-08,
2116
+ "logits/chosen": -1.5869381427764893,
2117
+ "logits/rejected": -1.5721813440322876,
2118
+ "logps/chosen": -709.0938110351562,
2119
+ "logps/rejected": -1064.73681640625,
2120
+ "loss": 0.2101,
2121
+ "rewards/accuracies": 0.925000011920929,
2122
+ "rewards/chosen": -3.0553669929504395,
2123
+ "rewards/margins": 3.6293067932128906,
2124
+ "rewards/rejected": -6.684674263000488,
2125
+ "step": 1390
2126
+ },
2127
+ {
2128
+ "epoch": 1.7094017094017095,
2129
+ "grad_norm": 10.34775868024534,
2130
+ "learning_rate": 3.1480110808950746e-08,
2131
+ "logits/chosen": -1.6079189777374268,
2132
+ "logits/rejected": -1.5532805919647217,
2133
+ "logps/chosen": -701.6239624023438,
2134
+ "logps/rejected": -1058.663330078125,
2135
+ "loss": 0.202,
2136
+ "rewards/accuracies": 0.921875,
2137
+ "rewards/chosen": -2.9393839836120605,
2138
+ "rewards/margins": 3.915294647216797,
2139
+ "rewards/rejected": -6.854678153991699,
2140
+ "step": 1400
2141
+ },
2142
+ {
2143
+ "epoch": 1.7216117216117217,
2144
+ "grad_norm": 8.551636954434649,
2145
+ "learning_rate": 2.8941518634136047e-08,
2146
+ "logits/chosen": -1.5729877948760986,
2147
+ "logits/rejected": -1.5950627326965332,
2148
+ "logps/chosen": -667.7506713867188,
2149
+ "logps/rejected": -1075.386962890625,
2150
+ "loss": 0.2036,
2151
+ "rewards/accuracies": 0.921875,
2152
+ "rewards/chosen": -2.7873880863189697,
2153
+ "rewards/margins": 4.09014892578125,
2154
+ "rewards/rejected": -6.877536773681641,
2155
+ "step": 1410
2156
+ },
2157
+ {
2158
+ "epoch": 1.7338217338217339,
2159
+ "grad_norm": 11.293363203538648,
2160
+ "learning_rate": 2.6503340748947083e-08,
2161
+ "logits/chosen": -1.6414867639541626,
2162
+ "logits/rejected": -1.6144354343414307,
2163
+ "logps/chosen": -710.8462524414062,
2164
+ "logps/rejected": -1076.911865234375,
2165
+ "loss": 0.2096,
2166
+ "rewards/accuracies": 0.934374988079071,
2167
+ "rewards/chosen": -3.167412519454956,
2168
+ "rewards/margins": 3.803969621658325,
2169
+ "rewards/rejected": -6.971382141113281,
2170
+ "step": 1420
2171
+ },
2172
+ {
2173
+ "epoch": 1.746031746031746,
2174
+ "grad_norm": 9.537502992918926,
2175
+ "learning_rate": 2.4166684678862208e-08,
2176
+ "logits/chosen": -1.5303064584732056,
2177
+ "logits/rejected": -1.523916482925415,
2178
+ "logps/chosen": -673.16748046875,
2179
+ "logps/rejected": -1046.1734619140625,
2180
+ "loss": 0.2136,
2181
+ "rewards/accuracies": 0.9156249761581421,
2182
+ "rewards/chosen": -2.9092774391174316,
2183
+ "rewards/margins": 3.6622211933135986,
2184
+ "rewards/rejected": -6.571497917175293,
2185
+ "step": 1430
2186
+ },
2187
+ {
2188
+ "epoch": 1.7582417582417582,
2189
+ "grad_norm": 11.313708406928527,
2190
+ "learning_rate": 2.1932611833775843e-08,
2191
+ "logits/chosen": -1.6144897937774658,
2192
+ "logits/rejected": -1.5737859010696411,
2193
+ "logps/chosen": -713.6920166015625,
2194
+ "logps/rejected": -1063.091064453125,
2195
+ "loss": 0.2143,
2196
+ "rewards/accuracies": 0.9125000238418579,
2197
+ "rewards/chosen": -3.0494279861450195,
2198
+ "rewards/margins": 3.7339680194854736,
2199
+ "rewards/rejected": -6.7833967208862305,
2200
+ "step": 1440
2201
+ },
2202
+ {
2203
+ "epoch": 1.7704517704517704,
2204
+ "grad_norm": 12.659032905074808,
2205
+ "learning_rate": 1.9802137025860394e-08,
2206
+ "logits/chosen": -1.541495680809021,
2207
+ "logits/rejected": -1.508460283279419,
2208
+ "logps/chosen": -672.1906127929688,
2209
+ "logps/rejected": -1020.8779296875,
2210
+ "loss": 0.2044,
2211
+ "rewards/accuracies": 0.9375,
2212
+ "rewards/chosen": -2.761587142944336,
2213
+ "rewards/margins": 3.6672203540802,
2214
+ "rewards/rejected": -6.428807258605957,
2215
+ "step": 1450
2216
+ },
2217
+ {
2218
+ "epoch": 1.7826617826617825,
2219
+ "grad_norm": 8.561486281617269,
2220
+ "learning_rate": 1.7776228008594962e-08,
2221
+ "logits/chosen": -1.5731874704360962,
2222
+ "logits/rejected": -1.6000001430511475,
2223
+ "logps/chosen": -699.12060546875,
2224
+ "logps/rejected": -1043.0130615234375,
2225
+ "loss": 0.1974,
2226
+ "rewards/accuracies": 0.909375011920929,
2227
+ "rewards/chosen": -2.973306894302368,
2228
+ "rewards/margins": 3.508462429046631,
2229
+ "rewards/rejected": -6.481769561767578,
2230
+ "step": 1460
2231
+ },
2232
+ {
2233
+ "epoch": 1.7948717948717947,
2234
+ "grad_norm": 9.72206499571952,
2235
+ "learning_rate": 1.5855805037169682e-08,
2236
+ "logits/chosen": -1.6329854726791382,
2237
+ "logits/rejected": -1.6412932872772217,
2238
+ "logps/chosen": -683.8406372070312,
2239
+ "logps/rejected": -1081.219482421875,
2240
+ "loss": 0.2127,
2241
+ "rewards/accuracies": 0.903124988079071,
2242
+ "rewards/chosen": -2.91072678565979,
2243
+ "rewards/margins": 4.06986665725708,
2244
+ "rewards/rejected": -6.980593681335449,
2245
+ "step": 1470
2246
+ },
2247
+ {
2248
+ "epoch": 1.807081807081807,
2249
+ "grad_norm": 9.226812102547623,
2250
+ "learning_rate": 1.4041740450466383e-08,
2251
+ "logits/chosen": -1.667553186416626,
2252
+ "logits/rejected": -1.6831690073013306,
2253
+ "logps/chosen": -684.6526489257812,
2254
+ "logps/rejected": -1098.877197265625,
2255
+ "loss": 0.2142,
2256
+ "rewards/accuracies": 0.9156249761581421,
2257
+ "rewards/chosen": -2.8064825534820557,
2258
+ "rewards/margins": 4.242220878601074,
2259
+ "rewards/rejected": -7.048703670501709,
2260
+ "step": 1480
2261
+ },
2262
+ {
2263
+ "epoch": 1.8192918192918193,
2264
+ "grad_norm": 10.957646684560045,
2265
+ "learning_rate": 1.2334858274804655e-08,
2266
+ "logits/chosen": -1.5813372135162354,
2267
+ "logits/rejected": -1.5335392951965332,
2268
+ "logps/chosen": -666.4457397460938,
2269
+ "logps/rejected": -1035.133544921875,
2270
+ "loss": 0.1955,
2271
+ "rewards/accuracies": 0.953125,
2272
+ "rewards/chosen": -2.7881767749786377,
2273
+ "rewards/margins": 3.741603136062622,
2274
+ "rewards/rejected": -6.529780387878418,
2275
+ "step": 1490
2276
+ },
2277
+ {
2278
+ "epoch": 1.8315018315018317,
2279
+ "grad_norm": 9.22473440765943,
2280
+ "learning_rate": 1.0735933849633561e-08,
2281
+ "logits/chosen": -1.5356563329696655,
2282
+ "logits/rejected": -1.5444982051849365,
2283
+ "logps/chosen": -656.3917236328125,
2284
+ "logps/rejected": -1014.2698974609375,
2285
+ "loss": 0.2173,
2286
+ "rewards/accuracies": 0.9125000238418579,
2287
+ "rewards/chosen": -2.679664134979248,
2288
+ "rewards/margins": 3.69049072265625,
2289
+ "rewards/rejected": -6.370155334472656,
2290
+ "step": 1500
2291
+ },
2292
+ {
2293
+ "epoch": 1.8437118437118438,
2294
+ "grad_norm": 10.099888495331802,
2295
+ "learning_rate": 9.245693475338906e-09,
2296
+ "logits/chosen": -1.6426986455917358,
2297
+ "logits/rejected": -1.636652946472168,
2298
+ "logps/chosen": -706.7011108398438,
2299
+ "logps/rejected": -1085.6732177734375,
2300
+ "loss": 0.2117,
2301
+ "rewards/accuracies": 0.918749988079071,
2302
+ "rewards/chosen": -2.9595284461975098,
2303
+ "rewards/margins": 3.9240341186523438,
2304
+ "rewards/rejected": -6.8835625648498535,
2305
+ "step": 1510
2306
+ },
2307
+ {
2308
+ "epoch": 1.855921855921856,
2309
+ "grad_norm": 11.718296845103126,
2310
+ "learning_rate": 7.86481408332651e-09,
2311
+ "logits/chosen": -1.63693368434906,
2312
+ "logits/rejected": -1.6097259521484375,
2313
+ "logps/chosen": -700.6412353515625,
2314
+ "logps/rejected": -1046.5230712890625,
2315
+ "loss": 0.2173,
2316
+ "rewards/accuracies": 0.90625,
2317
+ "rewards/chosen": -3.012932300567627,
2318
+ "rewards/margins": 3.689767837524414,
2319
+ "rewards/rejected": -6.702700614929199,
2320
+ "step": 1520
2321
+ },
2322
+ {
2323
+ "epoch": 1.8681318681318682,
2324
+ "grad_norm": 11.61323707998988,
2325
+ "learning_rate": 6.593922928530754e-09,
2326
+ "logits/chosen": -1.6446201801300049,
2327
+ "logits/rejected": -1.648411750793457,
2328
+ "logps/chosen": -701.7564697265625,
2329
+ "logps/rejected": -1085.4237060546875,
2330
+ "loss": 0.2141,
2331
+ "rewards/accuracies": 0.9125000238418579,
2332
+ "rewards/chosen": -2.927586317062378,
2333
+ "rewards/margins": 3.9655041694641113,
2334
+ "rewards/rejected": -6.893090724945068,
2335
+ "step": 1530
2336
+ },
2337
+ {
2338
+ "epoch": 1.8803418803418803,
2339
+ "grad_norm": 10.971983293596454,
2340
+ "learning_rate": 5.433597304488113e-09,
2341
+ "logits/chosen": -1.602278709411621,
2342
+ "logits/rejected": -1.5537140369415283,
2343
+ "logps/chosen": -681.44677734375,
2344
+ "logps/rejected": -1042.9771728515625,
2345
+ "loss": 0.2152,
2346
+ "rewards/accuracies": 0.934374988079071,
2347
+ "rewards/chosen": -2.83951997756958,
2348
+ "rewards/margins": 3.8127970695495605,
2349
+ "rewards/rejected": -6.652318000793457,
2350
+ "step": 1540
2351
+ },
2352
+ {
2353
+ "epoch": 1.8925518925518925,
2354
+ "grad_norm": 9.023723155797857,
2355
+ "learning_rate": 4.384364281105973e-09,
2356
+ "logits/chosen": -1.5327403545379639,
2357
+ "logits/rejected": -1.5472204685211182,
2358
+ "logps/chosen": -680.0806274414062,
2359
+ "logps/rejected": -1091.057861328125,
2360
+ "loss": 0.2025,
2361
+ "rewards/accuracies": 0.90625,
2362
+ "rewards/chosen": -2.8416030406951904,
2363
+ "rewards/margins": 4.13409948348999,
2364
+ "rewards/rejected": -6.97570276260376,
2365
+ "step": 1550
2366
+ },
2367
+ {
2368
+ "epoch": 1.9047619047619047,
2369
+ "grad_norm": 9.01098707409564,
2370
+ "learning_rate": 3.4467004652442842e-09,
2371
+ "logits/chosen": -1.4875050783157349,
2372
+ "logits/rejected": -1.4984132051467896,
+ "logps/chosen": -679.8426513671875,
+ "logps/rejected": -1051.27392578125,
+ "loss": 0.1913,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -2.8839166164398193,
+ "rewards/margins": 3.8126988410949707,
+ "rewards/rejected": -6.696614742279053,
+ "step": 1560
+ },
+ {
+ "epoch": 1.9169719169719168,
+ "grad_norm": 9.97369900546846,
+ "learning_rate": 2.6210317842206565e-09,
+ "logits/chosen": -1.5761525630950928,
+ "logits/rejected": -1.5815128087997437,
+ "logps/chosen": -681.6040649414062,
+ "logps/rejected": -1054.3304443359375,
+ "loss": 0.2155,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -2.8990960121154785,
+ "rewards/margins": 3.823047637939453,
+ "rewards/rejected": -6.722143650054932,
+ "step": 1570
+ },
+ {
+ "epoch": 1.9291819291819292,
+ "grad_norm": 9.735400391893615,
+ "learning_rate": 1.9077332923353728e-09,
+ "logits/chosen": -1.554095983505249,
+ "logits/rejected": -1.505615234375,
+ "logps/chosen": -667.6380004882812,
+ "logps/rejected": -1052.5147705078125,
+ "loss": 0.1958,
+ "rewards/accuracies": 0.9312499761581421,
+ "rewards/chosen": -2.7255098819732666,
+ "rewards/margins": 4.0490617752075195,
+ "rewards/rejected": -6.774571895599365,
+ "step": 1580
+ },
+ {
+ "epoch": 1.9413919413919414,
+ "grad_norm": 11.38486846510476,
+ "learning_rate": 1.307129000505891e-09,
+ "logits/chosen": -1.550065279006958,
+ "logits/rejected": -1.5272270441055298,
+ "logps/chosen": -644.3756103515625,
+ "logps/rejected": -1003.22314453125,
+ "loss": 0.2359,
+ "rewards/accuracies": 0.921875,
+ "rewards/chosen": -2.641263484954834,
+ "rewards/margins": 3.7743351459503174,
+ "rewards/rejected": -6.415598392486572,
+ "step": 1590
+ },
+ {
+ "epoch": 1.9536019536019538,
+ "grad_norm": 12.537603692123119,
+ "learning_rate": 8.194917290869907e-10,
+ "logits/chosen": -1.638336420059204,
+ "logits/rejected": -1.6147558689117432,
+ "logps/chosen": -701.1859130859375,
+ "logps/rejected": -1065.903076171875,
+ "loss": 0.2184,
+ "rewards/accuracies": 0.921875,
+ "rewards/chosen": -2.792646884918213,
+ "rewards/margins": 3.8054585456848145,
+ "rewards/rejected": -6.598104953765869,
+ "step": 1600
+ },
+ {
+ "epoch": 1.965811965811966,
+ "grad_norm": 9.457547390988431,
+ "learning_rate": 4.450429839439884e-10,
+ "logits/chosen": -1.52217435836792,
+ "logits/rejected": -1.5371259450912476,
+ "logps/chosen": -676.3851318359375,
+ "logps/rejected": -1045.146728515625,
+ "loss": 0.2196,
+ "rewards/accuracies": 0.918749988079071,
+ "rewards/chosen": -2.9359381198883057,
+ "rewards/margins": 3.676692247390747,
+ "rewards/rejected": -6.612630367279053,
+ "step": 1610
+ },
+ {
+ "epoch": 1.978021978021978,
+ "grad_norm": 9.281764750094489,
+ "learning_rate": 1.8395285583530652e-10,
+ "logits/chosen": -1.5917165279388428,
+ "logits/rejected": -1.591761827468872,
+ "logps/chosen": -704.8922729492188,
+ "logps/rejected": -1085.7891845703125,
+ "loss": 0.2179,
+ "rewards/accuracies": 0.9281250238418579,
+ "rewards/chosen": -2.95951509475708,
+ "rewards/margins": 3.9156622886657715,
+ "rewards/rejected": -6.87517786026001,
+ "step": 1620
+ },
+ {
+ "epoch": 1.9902319902319903,
+ "grad_norm": 9.654531338082007,
+ "learning_rate": 3.63399431498046e-11,
+ "logits/chosen": -1.483194351196289,
+ "logits/rejected": -1.4703377485275269,
+ "logps/chosen": -654.5201416015625,
+ "logps/rejected": -1018.1467895507812,
+ "loss": 0.2121,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -2.7714340686798096,
+ "rewards/margins": 3.737165927886963,
+ "rewards/rejected": -6.50860071182251,
+ "step": 1630
+ },
+ {
+ "epoch": 2.0,
+ "step": 1638,
+ "total_flos": 0.0,
+ "train_loss": 0.2817063624168927,
+ "train_runtime": 11455.6006,
+ "train_samples_per_second": 36.602,
+ "train_steps_per_second": 0.143
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 1638,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 2,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }