ludekcizinsky committed
Commit 4c87cf5 · verified · Parent: 7ac4bbe

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ library_name: peft
+ base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.11.1
adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 32,
+ "rank_pattern": {},
+ "revision": "unsloth",
+ "target_modules": [
+ "up_proj",
+ "down_proj",
+ "gate_proj",
+ "k_proj",
+ "q_proj",
+ "o_proj",
+ "v_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": true
+ }
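One detail worth noting in this config: `use_rslora: true` switches the adapter from standard LoRA scaling (`lora_alpha / r`) to rank-stabilized LoRA scaling (`lora_alpha / sqrt(r)`), which changes the effective magnitude of the adapter update considerably at rank 32. A minimal sketch of the difference, using the `lora_alpha` and `r` values from the config above:

```python
import math

# Values from adapter_config.json above
lora_alpha = 16
r = 32

# Standard LoRA scales the low-rank update (B @ A) by alpha / r
standard_scaling = lora_alpha / r            # 0.5

# Rank-stabilized LoRA (use_rslora: true) scales by alpha / sqrt(r),
# which keeps the update magnitude stable as the rank grows
rslora_scaling = lora_alpha / math.sqrt(r)   # ~2.83

print(standard_scaling, rslora_scaling)
```

So at r=32 the rsLoRA update is scaled roughly 5.7x more strongly than standard LoRA would scale it, which matters when comparing this adapter's hyperparameters against non-rsLoRA runs.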
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:691689e3469608c048be4d31b5ba22b8a523cb0b4eb357955fac1b15fb1cd356
+ size 119597408
added_tokens.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "<|assistant|>": 32001,
+ "<|endoftext|>": 32000,
+ "<|end|>": 32007,
+ "<|placeholder1|>": 32002,
+ "<|placeholder2|>": 32003,
+ "<|placeholder3|>": 32004,
+ "<|placeholder4|>": 32005,
+ "<|placeholder5|>": 32008,
+ "<|placeholder6|>": 32009,
+ "<|system|>": 32006,
+ "<|user|>": 32010
+ }
all_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 1.0,
+ "train_loss": 21.99049959550403,
+ "train_runtime": 22427.3619,
+ "train_samples_per_second": 0.906,
+ "train_steps_per_second": 0.057
+ }
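The throughput figures in `all_results.json` are internally consistent and let us back out the run's scale. A quick check using only the numbers above (the derived quantities are computed here, not stated in the file):

```python
# Values from all_results.json above
train_runtime = 22427.3619      # seconds (~6.2 hours)
samples_per_second = 0.906
steps_per_second = 0.057

# Samples and optimizer steps implied by the runtime
total_samples = train_runtime * samples_per_second  # ~20,319 samples in 1 epoch
total_steps = train_runtime * steps_per_second      # ~1,278 optimizer steps

# Samples consumed per optimizer step, i.e. the effective batch size
effective_batch = samples_per_second / steps_per_second  # ~16

print(round(total_samples), round(total_steps), round(effective_batch))
```

An effective batch of ~16 would be consistent with, for example, a per-device batch of 2 with 8 gradient-accumulation steps, though the exact split is not recorded here.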
checkpoint-600/README.md ADDED
@@ -0,0 +1,202 @@
checkpoint-600/adapter_config.json ADDED
@@ -0,0 +1,34 @@
checkpoint-600/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb312926d0e5163c10cdf5e557ff1d7b6725738f96287562e12106f6e63021de
+ size 239135488
checkpoint-600/added_tokens.json ADDED
@@ -0,0 +1,13 @@
checkpoint-600/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecec7e17abb14704b36cc5e250a2b4233f60b449da511c9c0d76587b256ff97c
+ size 120296724
checkpoint-600/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e17276c536f724510552c073087345b89701056c13843196191b069e70f168b
+ size 14244
checkpoint-600/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ce082c364b90df902b6a589444b958dd9f12787f8dd08d95fd510d4befb299e
+ size 1064
checkpoint-600/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|endoftext|>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-600/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-600/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
checkpoint-600/tokenizer_config.json ADDED
@@ -0,0 +1,130 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": false
+ },
+ "32000": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32001": {
+ "content": "<|assistant|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32002": {
+ "content": "<|placeholder1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32003": {
+ "content": "<|placeholder2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32004": {
+ "content": "<|placeholder3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32005": {
+ "content": "<|placeholder4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32006": {
+ "content": "<|system|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32007": {
+ "content": "<|end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32008": {
+ "content": "<|placeholder5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32009": {
+ "content": "<|placeholder6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32010": {
+ "content": "<|user|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|user|>' + '\n' + message['content'] + '<|end|>' + '\n' + '<|assistant|>' + '\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|end|>' + '\n'}}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "legacy": false,
+ "model_max_length": 4096,
+ "pad_token": "<|endoftext|>",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
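The `chat_template` above is the Phi-3 conversation format: each user turn is wrapped in `<|user|> ... <|end|>` and followed by `<|assistant|>` to cue the model's reply, while assistant turns end with `<|end|>`. A minimal Python sketch of what the Jinja template renders (`format_chat` is a hypothetical helper for illustration, not part of the tokenizer API; in practice `tokenizer.apply_chat_template` does this):

```python
def format_chat(messages, bos_token="<s>"):
    # Mirrors the Jinja chat_template in tokenizer_config.json:
    # user turns render as "<|user|>\n{content}<|end|>\n<|assistant|>\n",
    # assistant turns render as "{content}<|end|>\n"
    out = bos_token
    for m in messages:
        if m["role"] == "user":
            out += "<|user|>\n" + m["content"] + "<|end|>\n<|assistant|>\n"
        elif m["role"] == "assistant":
            out += m["content"] + "<|end|>\n"
    return out

prompt = format_chat([{"role": "user", "content": "Hello"}])
print(repr(prompt))  # '<s><|user|>\nHello<|end|>\n<|assistant|>\n'
```

Note the template has no branch for a `system` role even though `<|system|>` exists in the vocabulary, so system messages are silently dropped by this template.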
checkpoint-600/trainer_state.json ADDED
@@ -0,0 +1,1929 @@
+ {
+ "best_metric": 21.601177215576172,
+ "best_model_checkpoint": "./output/checkpoints/2024-05-27_09-02-19/checkpoint-600",
+ "epoch": 0.47206923682140045,
+ "eval_steps": 100,
+ "global_step": 600,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.003933910306845004,
+ "grad_norm": 26.446468353271484,
+ "learning_rate": 9.375000000000001e-07,
+ "logits/chosen": -0.2329835593700409,
+ "logits/rejected": -0.7131723165512085,
+ "logps/chosen": -1.0090148448944092,
+ "logps/rejected": -1.6766555309295654,
+ "loss": 25.0031,
+ "rewards/accuracies": 0.1875,
+ "rewards/chosen": 8.527375939593185e-06,
+ "rewards/margins": -3.058705624425784e-05,
+ "rewards/rejected": 3.911443127435632e-05,
+ "step": 5
+ },
+ {
+ "epoch": 0.007867820613690008,
+ "grad_norm": 11.936881065368652,
+ "learning_rate": 2.5e-06,
+ "logits/chosen": -0.396948903799057,
+ "logits/rejected": -0.7360211610794067,
+ "logps/chosen": -0.8984262347221375,
+ "logps/rejected": -1.1693015098571777,
+ "loss": 24.9925,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": -4.258692206349224e-05,
+ "rewards/margins": 7.496408943552524e-05,
+ "rewards/rejected": -0.00011755101149901748,
+ "step": 10
+ },
+ {
+ "epoch": 0.011801730920535013,
+ "grad_norm": 13.576423645019531,
+ "learning_rate": 4.0625000000000005e-06,
+ "logits/chosen": -0.3573324680328369,
+ "logits/rejected": -0.6578253507614136,
+ "logps/chosen": -0.8142125010490417,
+ "logps/rejected": -1.0063048601150513,
+ "loss": 24.98,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.00027080357540398836,
+ "rewards/margins": 0.00020028329163324088,
+ "rewards/rejected": -0.00047108688158914447,
+ "step": 15
+ },
+ {
+ "epoch": 0.015735641227380016,
+ "grad_norm": 34.40192794799805,
+ "learning_rate": 5.3125e-06,
+ "logits/chosen": -0.3880882263183594,
+ "logits/rejected": -0.7228592038154602,
+ "logps/chosen": -1.1428436040878296,
+ "logps/rejected": -1.567692756652832,
+ "loss": 24.8648,
+ "rewards/accuracies": 0.550000011920929,
+ "rewards/chosen": -0.001833672053180635,
+ "rewards/margins": 0.0014055297942832112,
+ "rewards/rejected": -0.003239201847463846,
+ "step": 20
+ },
+ {
+ "epoch": 0.01966955153422502,
+ "grad_norm": 16.62054443359375,
+ "learning_rate": 6.875e-06,
+ "logits/chosen": -0.25890520215034485,
+ "logits/rejected": -0.7020931839942932,
+ "logps/chosen": -1.212777853012085,
+ "logps/rejected": -1.3589212894439697,
+ "loss": 24.939,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": -0.0046936506405472755,
+ "rewards/margins": 0.0006289022276178002,
+ "rewards/rejected": -0.005322552751749754,
+ "step": 25
+ },
+ {
+ "epoch": 0.023603461841070025,
+ "grad_norm": 22.588180541992188,
+ "learning_rate": 8.4375e-06,
+ "logits/chosen": -0.3284154236316681,
+ "logits/rejected": -0.6061900854110718,
+ "logps/chosen": -0.9161252975463867,
+ "logps/rejected": -1.1756784915924072,
+ "loss": 24.7008,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.004226677119731903,
+ "rewards/margins": 0.003145116614177823,
+ "rewards/rejected": -0.0073717935010790825,
+ "step": 30
+ },
+ {
+ "epoch": 0.02753737214791503,
+ "grad_norm": 35.44740295410156,
+ "learning_rate": 1e-05,
+ "logits/chosen": -0.4888971447944641,
+ "logits/rejected": -0.7553955912590027,
+ "logps/chosen": -1.252327561378479,
+ "logps/rejected": -1.473224401473999,
+ "loss": 24.5665,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -0.013578332960605621,
+ "rewards/margins": 0.004552872385829687,
+ "rewards/rejected": -0.018131205812096596,
+ "step": 35
+ },
+ {
+ "epoch": 0.03147128245476003,
+ "grad_norm": 32.20027160644531,
+ "learning_rate": 1.1562500000000002e-05,
+ "logits/chosen": -0.4067641794681549,
+ "logits/rejected": -0.7352877855300903,
+ "logps/chosen": -1.055959939956665,
+ "logps/rejected": -1.4485868215560913,
+ "loss": 24.0967,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.016065727919340134,
+ "rewards/margins": 0.011377329006791115,
+ "rewards/rejected": -0.02744305692613125,
+ "step": 40
+ },
+ {
+ "epoch": 0.03540519276160504,
+ "grad_norm": NaN,
+ "learning_rate": 1.2812500000000001e-05,
+ "logits/chosen": -0.7447024583816528,
+ "logits/rejected": -1.0448763370513916,
+ "logps/chosen": -1.723064661026001,
+ "logps/rejected": -2.249486207962036,
+ "loss": 24.0293,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -0.041548244655132294,
+ "rewards/margins": 0.013970533385872841,
+ "rewards/rejected": -0.05551878362894058,
+ "step": 45
+ },
+ {
+ "epoch": 0.03933910306845004,
+ "grad_norm": 26.65926742553711,
+ "learning_rate": 1.4375e-05,
+ "logits/chosen": -0.37696924805641174,
+ "logits/rejected": -0.46783286333084106,
+ "logps/chosen": -1.015779733657837,
+ "logps/rejected": -1.599536418914795,
+ "loss": 24.0592,
+ "rewards/accuracies": 0.550000011920929,
+ "rewards/chosen": -0.020270783454179764,
+ "rewards/margins": 0.019904401153326035,
+ "rewards/rejected": -0.0401751883327961,
+ "step": 50
+ },
+ {
+ "epoch": 0.043273013375295044,
+ "grad_norm": 30.033103942871094,
+ "learning_rate": 1.59375e-05,
+ "logits/chosen": -0.7907823324203491,
+ "logits/rejected": -0.990174412727356,
+ "logps/chosen": -1.7764304876327515,
+ "logps/rejected": -1.9972736835479736,
+ "loss": 24.6893,
+ "rewards/accuracies": 0.5249999761581421,
+ "rewards/chosen": -0.05663704872131348,
+ "rewards/margins": 0.012498116120696068,
+ "rewards/rejected": -0.0691351667046547,
+ "step": 55
+ },
+ {
+ "epoch": 0.04720692368214005,
+ "grad_norm": 155.12327575683594,
+ "learning_rate": 1.7500000000000002e-05,
+ "logits/chosen": -0.5883419513702393,
+ "logits/rejected": -0.9729728698730469,
+ "logps/chosen": -1.749121904373169,
+ "logps/rejected": -2.5301637649536133,
+ "loss": 23.3501,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.06644631177186966,
+ "rewards/margins": 0.04214775934815407,
+ "rewards/rejected": -0.10859407484531403,
+ "step": 60
+ },
+ {
+ "epoch": 0.05114083398898505,
+ "grad_norm": 106.24934387207031,
+ "learning_rate": 1.8750000000000002e-05,
+ "logits/chosen": -0.8320428133010864,
+ "logits/rejected": -1.0106093883514404,
+ "logps/chosen": -1.1940712928771973,
+ "logps/rejected": -2.7599964141845703,
+ "loss": 23.1753,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.03951374441385269,
+ "rewards/margins": 0.08484560251235962,
+ "rewards/rejected": -0.12435934692621231,
+ "step": 65
+ },
+ {
+ "epoch": 0.05507474429583006,
+ "grad_norm": 82.20561981201172,
+ "learning_rate": 2.0312500000000002e-05,
+ "logits/chosen": -0.9974054098129272,
+ "logits/rejected": -1.2483726739883423,
+ "logps/chosen": -1.6322723627090454,
+ "logps/rejected": -2.4456088542938232,
+ "loss": 21.933,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -0.07070576399564743,
+ "rewards/margins": 0.055024802684783936,
+ "rewards/rejected": -0.12573055922985077,
+ "step": 70
+ },
+ {
+ "epoch": 0.059008654602675056,
+ "grad_norm": 1317.8922119140625,
+ "learning_rate": 2.1562500000000002e-05,
+ "logits/chosen": -1.0032284259796143,
+ "logits/rejected": -1.2543575763702393,
+ "logps/chosen": -1.648602843284607,
+ "logps/rejected": -3.2052788734436035,
+ "loss": 30.9638,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.08028480410575867,
+ "rewards/margins": 0.09190882742404938,
+ "rewards/rejected": -0.17219363152980804,
+ "step": 75
+ },
+ {
+ "epoch": 0.06294256490952006,
+ "grad_norm": 135.98031616210938,
+ "learning_rate": 2.3125000000000003e-05,
+ "logits/chosen": -1.1862694025039673,
241
+ "logits/rejected": -1.268090844154358,
242
+ "logps/chosen": -1.7252533435821533,
243
+ "logps/rejected": -2.730776786804199,
244
+ "loss": 23.1291,
245
+ "rewards/accuracies": 0.7124999761581421,
246
+ "rewards/chosen": -0.08682449907064438,
247
+ "rewards/margins": 0.07040676474571228,
248
+ "rewards/rejected": -0.15723127126693726,
249
+ "step": 80
250
+ },
251
+ {
252
+ "epoch": 0.06687647521636507,
253
+ "grad_norm": 83.77543640136719,
254
+ "learning_rate": 2.46875e-05,
255
+ "logits/chosen": -1.4288493394851685,
256
+ "logits/rejected": -1.6324199438095093,
257
+ "logps/chosen": -1.858473539352417,
258
+ "logps/rejected": -2.4925625324249268,
259
+ "loss": 22.47,
260
+ "rewards/accuracies": 0.7124999761581421,
261
+ "rewards/chosen": -0.10322336852550507,
262
+ "rewards/margins": 0.040188394486904144,
263
+ "rewards/rejected": -0.143411785364151,
264
+ "step": 85
265
+ },
266
+ {
267
+ "epoch": 0.07081038552321008,
268
+ "grad_norm": 112.26095581054688,
269
+ "learning_rate": 2.625e-05,
270
+ "logits/chosen": -1.5094571113586426,
271
+ "logits/rejected": -1.6503517627716064,
272
+ "logps/chosen": -2.1784815788269043,
273
+ "logps/rejected": -2.8247978687286377,
274
+ "loss": 26.8652,
275
+ "rewards/accuracies": 0.6625000238418579,
276
+ "rewards/chosen": -0.12195611000061035,
277
+ "rewards/margins": 0.04399397224187851,
278
+ "rewards/rejected": -0.16595008969306946,
279
+ "step": 90
280
+ },
281
+ {
282
+ "epoch": 0.07474429583005507,
283
+ "grad_norm": 116.14981079101562,
284
+ "learning_rate": 2.7812500000000002e-05,
285
+ "logits/chosen": -1.705255150794983,
286
+ "logits/rejected": -1.8774993419647217,
287
+ "logps/chosen": -2.0536391735076904,
288
+ "logps/rejected": -2.913367748260498,
289
+ "loss": 23.1756,
290
+ "rewards/accuracies": 0.637499988079071,
291
+ "rewards/chosen": -0.11939144134521484,
292
+ "rewards/margins": 0.03494938462972641,
293
+ "rewards/rejected": -0.15434083342552185,
294
+ "step": 95
295
+ },
296
+ {
297
+ "epoch": 0.07867820613690008,
298
+ "grad_norm": 171.99806213378906,
299
+ "learning_rate": 2.9375000000000003e-05,
300
+ "logits/chosen": -1.7922956943511963,
301
+ "logits/rejected": -1.805193305015564,
302
+ "logps/chosen": -2.2087619304656982,
303
+ "logps/rejected": -2.953051805496216,
304
+ "loss": 23.3214,
305
+ "rewards/accuracies": 0.6000000238418579,
306
+ "rewards/chosen": -0.1307235062122345,
307
+ "rewards/margins": 0.03435593843460083,
308
+ "rewards/rejected": -0.16507944464683533,
309
+ "step": 100
310
+ },
311
+ {
312
+ "epoch": 0.07867820613690008,
313
+ "eval_logits/chosen": -1.966138243675232,
314
+ "eval_logits/rejected": -2.09938645362854,
315
+ "eval_logps/chosen": -2.3467371463775635,
316
+ "eval_logps/rejected": -2.9913861751556396,
317
+ "eval_loss": 22.537601470947266,
318
+ "eval_rewards/accuracies": 0.643750011920929,
319
+ "eval_rewards/chosen": -0.13215361535549164,
320
+ "eval_rewards/margins": 0.04243787005543709,
321
+ "eval_rewards/rejected": -0.17459148168563843,
322
+ "eval_runtime": 254.2532,
323
+ "eval_samples_per_second": 2.517,
324
+ "eval_steps_per_second": 0.157,
325
+ "step": 100
326
+ },
327
+ {
328
+ "epoch": 0.08261211644374508,
329
+ "grad_norm": 90.37792205810547,
330
+ "learning_rate": 3.09375e-05,
331
+ "logits/chosen": -1.7015612125396729,
332
+ "logits/rejected": -1.8363323211669922,
333
+ "logps/chosen": -2.096281051635742,
334
+ "logps/rejected": -3.1252198219299316,
335
+ "loss": 27.6536,
336
+ "rewards/accuracies": 0.637499988079071,
337
+ "rewards/chosen": -0.11915542930364609,
338
+ "rewards/margins": 0.06005290150642395,
339
+ "rewards/rejected": -0.17920835316181183,
340
+ "step": 105
341
+ },
342
+ {
343
+ "epoch": 0.08654602675059009,
344
+ "grad_norm": 88.4887466430664,
345
+ "learning_rate": 3.2500000000000004e-05,
346
+ "logits/chosen": -1.657274842262268,
347
+ "logits/rejected": -1.8521808385849,
348
+ "logps/chosen": -1.8666527271270752,
349
+ "logps/rejected": -2.9845376014709473,
350
+ "loss": 21.5034,
351
+ "rewards/accuracies": 0.6499999761581421,
352
+ "rewards/chosen": -0.10541417449712753,
353
+ "rewards/margins": 0.05897489935159683,
354
+ "rewards/rejected": -0.16438907384872437,
355
+ "step": 110
356
+ },
357
+ {
358
+ "epoch": 0.0904799370574351,
359
+ "grad_norm": 134.99591064453125,
360
+ "learning_rate": 3.40625e-05,
361
+ "logits/chosen": -1.6925245523452759,
362
+ "logits/rejected": -1.7081615924835205,
363
+ "logps/chosen": -2.6438632011413574,
364
+ "logps/rejected": -3.7139625549316406,
365
+ "loss": 22.8601,
366
+ "rewards/accuracies": 0.625,
367
+ "rewards/chosen": -0.14907808601856232,
368
+ "rewards/margins": 0.05926816537976265,
369
+ "rewards/rejected": -0.20834624767303467,
370
+ "step": 115
371
+ },
372
+ {
373
+ "epoch": 0.0944138473642801,
374
+ "grad_norm": 105.24943542480469,
375
+ "learning_rate": 3.5625000000000005e-05,
376
+ "logits/chosen": -1.591507911682129,
377
+ "logits/rejected": -1.58656907081604,
378
+ "logps/chosen": -1.9171836376190186,
379
+ "logps/rejected": -2.5378520488739014,
380
+ "loss": 23.0976,
381
+ "rewards/accuracies": 0.550000011920929,
382
+ "rewards/chosen": -0.10242807865142822,
383
+ "rewards/margins": 0.03649063780903816,
384
+ "rewards/rejected": -0.1389187127351761,
385
+ "step": 120
386
+ },
387
+ {
388
+ "epoch": 0.0983477576711251,
389
+ "grad_norm": 87.82530212402344,
390
+ "learning_rate": 3.71875e-05,
391
+ "logits/chosen": -1.3108810186386108,
392
+ "logits/rejected": -1.4434765577316284,
393
+ "logps/chosen": -2.124854564666748,
394
+ "logps/rejected": -3.0788867473602295,
395
+ "loss": 24.5017,
396
+ "rewards/accuracies": 0.6000000238418579,
397
+ "rewards/chosen": -0.1214764267206192,
398
+ "rewards/margins": 0.04934501647949219,
399
+ "rewards/rejected": -0.1708214432001114,
400
+ "step": 125
401
+ },
402
+ {
403
+ "epoch": 0.1022816679779701,
404
+ "grad_norm": 69.61011505126953,
405
+ "learning_rate": 3.875e-05,
406
+ "logits/chosen": -1.286228895187378,
407
+ "logits/rejected": -1.5050832033157349,
408
+ "logps/chosen": -2.5508532524108887,
409
+ "logps/rejected": -3.24528169631958,
410
+ "loss": 21.4116,
411
+ "rewards/accuracies": 0.6499999761581421,
412
+ "rewards/chosen": -0.15334758162498474,
413
+ "rewards/margins": 0.04556337743997574,
414
+ "rewards/rejected": -0.19891095161437988,
415
+ "step": 130
416
+ },
417
+ {
418
+ "epoch": 0.10621557828481511,
419
+ "grad_norm": 101.8541259765625,
420
+ "learning_rate": 3.999992445477636e-05,
421
+ "logits/chosen": -1.3636066913604736,
422
+ "logits/rejected": -1.5931237936019897,
423
+ "logps/chosen": -3.0847220420837402,
424
+ "logps/rejected": -3.839167356491089,
425
+ "loss": 21.3367,
426
+ "rewards/accuracies": 0.675000011920929,
427
+ "rewards/chosen": -0.2012835294008255,
428
+ "rewards/margins": 0.05702406167984009,
429
+ "rewards/rejected": -0.2583075761795044,
430
+ "step": 135
431
+ },
432
+ {
433
+ "epoch": 0.11014948859166011,
434
+ "grad_norm": 701.3626098632812,
435
+ "learning_rate": 3.999728043187288e-05,
436
+ "logits/chosen": -1.4217129945755005,
437
+ "logits/rejected": -1.4933011531829834,
438
+ "logps/chosen": -3.9832420349121094,
439
+ "logps/rejected": -5.435095310211182,
440
+ "loss": 23.8821,
441
+ "rewards/accuracies": 0.675000011920929,
442
+ "rewards/chosen": -0.2787173390388489,
443
+ "rewards/margins": 0.09217057377099991,
444
+ "rewards/rejected": -0.3708879351615906,
445
+ "step": 140
446
+ },
447
+ {
448
+ "epoch": 0.11408339889850512,
449
+ "grad_norm": 163.25831604003906,
450
+ "learning_rate": 3.9990859718476166e-05,
451
+ "logits/chosen": -1.4570497274398804,
452
+ "logits/rejected": -1.473787784576416,
453
+ "logps/chosen": -3.3077120780944824,
454
+ "logps/rejected": -4.8310956954956055,
455
+ "loss": 20.1222,
456
+ "rewards/accuracies": 0.637499988079071,
457
+ "rewards/chosen": -0.2514913082122803,
458
+ "rewards/margins": 0.11996223777532578,
459
+ "rewards/rejected": -0.37145355343818665,
460
+ "step": 145
461
+ },
462
+ {
463
+ "epoch": 0.11801730920535011,
464
+ "grad_norm": 274.31146240234375,
465
+ "learning_rate": 3.998066352720348e-05,
466
+ "logits/chosen": -1.4901472330093384,
467
+ "logits/rejected": -1.5641014575958252,
468
+ "logps/chosen": -4.366189956665039,
469
+ "logps/rejected": -5.792475700378418,
470
+ "loss": 24.8947,
471
+ "rewards/accuracies": 0.574999988079071,
472
+ "rewards/chosen": -0.3093239367008209,
473
+ "rewards/margins": 0.1069142073392868,
474
+ "rewards/rejected": -0.4162382185459137,
475
+ "step": 150
476
+ },
477
+ {
478
+ "epoch": 0.12195121951219512,
479
+ "grad_norm": 204.0838623046875,
480
+ "learning_rate": 3.9966693783709596e-05,
481
+ "logits/chosen": -1.775489091873169,
482
+ "logits/rejected": -1.6681087017059326,
483
+ "logps/chosen": -3.3862712383270264,
484
+ "logps/rejected": -3.850006580352783,
485
+ "loss": 25.0897,
486
+ "rewards/accuracies": 0.5625,
487
+ "rewards/chosen": -0.22455720603466034,
488
+ "rewards/margins": 0.03894208371639252,
489
+ "rewards/rejected": -0.26349928975105286,
490
+ "step": 155
491
+ },
492
+ {
493
+ "epoch": 0.12588512981904013,
494
+ "grad_norm": 117.58898162841797,
495
+ "learning_rate": 3.9948953126323144e-05,
496
+ "logits/chosen": -1.7140939235687256,
497
+ "logits/rejected": -1.8748031854629517,
498
+ "logps/chosen": -2.808436632156372,
499
+ "logps/rejected": -3.4621639251708984,
500
+ "loss": 22.909,
501
+ "rewards/accuracies": 0.5874999761581421,
502
+ "rewards/chosen": -0.17321309447288513,
503
+ "rewards/margins": 0.03799115866422653,
504
+ "rewards/rejected": -0.21120426058769226,
505
+ "step": 160
506
+ },
507
+ {
508
+ "epoch": 0.12981904012588513,
509
+ "grad_norm": 75.52522277832031,
510
+ "learning_rate": 3.992744490554832e-05,
511
+ "logits/chosen": -1.5584402084350586,
512
+ "logits/rejected": -1.6980777978897095,
513
+ "logps/chosen": -2.5916876792907715,
514
+ "logps/rejected": -3.250844955444336,
515
+ "loss": 22.8805,
516
+ "rewards/accuracies": 0.574999988079071,
517
+ "rewards/chosen": -0.16778233647346497,
518
+ "rewards/margins": 0.046813301742076874,
519
+ "rewards/rejected": -0.21459563076496124,
520
+ "step": 165
521
+ },
522
+ {
523
+ "epoch": 0.13375295043273014,
524
+ "grad_norm": 210.5413818359375,
525
+ "learning_rate": 3.990217318343214e-05,
526
+ "logits/chosen": -1.6046726703643799,
527
+ "logits/rejected": -1.785196304321289,
528
+ "logps/chosen": -3.144779920578003,
529
+ "logps/rejected": -4.314496040344238,
530
+ "loss": 21.1638,
531
+ "rewards/accuracies": 0.625,
532
+ "rewards/chosen": -0.2154751569032669,
533
+ "rewards/margins": 0.07778888940811157,
534
+ "rewards/rejected": -0.2932640314102173,
535
+ "step": 170
536
+ },
537
+ {
538
+ "epoch": 0.13768686073957515,
539
+ "grad_norm": 137.43014526367188,
540
+ "learning_rate": 3.987314273279721e-05,
541
+ "logits/chosen": -1.538189172744751,
542
+ "logits/rejected": -1.7611982822418213,
543
+ "logps/chosen": -3.314256191253662,
544
+ "logps/rejected": -4.361076354980469,
545
+ "loss": 22.1568,
546
+ "rewards/accuracies": 0.675000011920929,
547
+ "rewards/chosen": -0.24516558647155762,
548
+ "rewards/margins": 0.08248710632324219,
549
+ "rewards/rejected": -0.3276526927947998,
550
+ "step": 175
551
+ },
552
+ {
553
+ "epoch": 0.14162077104642015,
554
+ "grad_norm": 162.08778381347656,
555
+ "learning_rate": 3.9840359036340424e-05,
556
+ "logits/chosen": -1.5785366296768188,
557
+ "logits/rejected": -1.6759214401245117,
558
+ "logps/chosen": -3.9628891944885254,
559
+ "logps/rejected": -4.663954257965088,
560
+ "loss": 23.3996,
561
+ "rewards/accuracies": 0.6875,
562
+ "rewards/chosen": -0.2786514163017273,
563
+ "rewards/margins": 0.06107773259282112,
564
+ "rewards/rejected": -0.3397291302680969,
565
+ "step": 180
566
+ },
567
+ {
568
+ "epoch": 0.14555468135326516,
569
+ "grad_norm": 183.8037567138672,
570
+ "learning_rate": 3.980382828559743e-05,
571
+ "logits/chosen": -1.8036388158798218,
572
+ "logits/rejected": -1.9109561443328857,
573
+ "logps/chosen": -4.924368381500244,
574
+ "logps/rejected": -5.794642448425293,
575
+ "loss": 22.9192,
576
+ "rewards/accuracies": 0.6625000238418579,
577
+ "rewards/chosen": -0.357871949672699,
578
+ "rewards/margins": 0.06735062599182129,
579
+ "rewards/rejected": -0.42522257566452026,
580
+ "step": 185
581
+ },
582
+ {
583
+ "epoch": 0.14948859166011014,
584
+ "grad_norm": 137.1122283935547,
585
+ "learning_rate": 3.9763557379773316e-05,
586
+ "logits/chosen": -1.7101930379867554,
587
+ "logits/rejected": -1.8198864459991455,
588
+ "logps/chosen": -3.584138870239258,
589
+ "logps/rejected": -4.54425048828125,
590
+ "loss": 20.9665,
591
+ "rewards/accuracies": 0.625,
592
+ "rewards/chosen": -0.27856582403182983,
593
+ "rewards/margins": 0.0747852548956871,
594
+ "rewards/rejected": -0.3533511161804199,
595
+ "step": 190
596
+ },
597
+ {
598
+ "epoch": 0.15342250196695514,
599
+ "grad_norm": 164.39016723632812,
600
+ "learning_rate": 3.971955392443965e-05,
601
+ "logits/chosen": -1.697361707687378,
602
+ "logits/rejected": -1.7193193435668945,
603
+ "logps/chosen": -3.8646721839904785,
604
+ "logps/rejected": -5.100863456726074,
605
+ "loss": 21.2867,
606
+ "rewards/accuracies": 0.6625000238418579,
607
+ "rewards/chosen": -0.29381299018859863,
608
+ "rewards/margins": 0.07520242035388947,
609
+ "rewards/rejected": -0.3690153658390045,
610
+ "step": 195
611
+ },
612
+ {
613
+ "epoch": 0.15735641227380015,
614
+ "grad_norm": 125.37354278564453,
615
+ "learning_rate": 3.9671826230098045e-05,
616
+ "logits/chosen": -1.6001428365707397,
617
+ "logits/rejected": -1.736572504043579,
618
+ "logps/chosen": -3.7506911754608154,
619
+ "logps/rejected": -4.7256364822387695,
620
+ "loss": 21.3918,
621
+ "rewards/accuracies": 0.612500011920929,
622
+ "rewards/chosen": -0.2859867215156555,
623
+ "rewards/margins": 0.07923749834299088,
624
+ "rewards/rejected": -0.3652242124080658,
625
+ "step": 200
626
+ },
627
+ {
628
+ "epoch": 0.15735641227380015,
629
+ "eval_logits/chosen": -1.5953483581542969,
630
+ "eval_logits/rejected": -1.725934624671936,
631
+ "eval_logps/chosen": -4.086066246032715,
632
+ "eval_logps/rejected": -5.129216194152832,
633
+ "eval_loss": 23.703336715698242,
634
+ "eval_rewards/accuracies": 0.6546875238418579,
635
+ "eval_rewards/chosen": -0.3060864806175232,
636
+ "eval_rewards/margins": 0.0822879821062088,
637
+ "eval_rewards/rejected": -0.3883745074272156,
638
+ "eval_runtime": 254.3055,
639
+ "eval_samples_per_second": 2.517,
640
+ "eval_steps_per_second": 0.157,
641
+ "step": 200
642
+ },
643
+ {
644
+ "epoch": 0.16129032258064516,
645
+ "grad_norm": 167.259033203125,
646
+ "learning_rate": 3.962038331061065e-05,
647
+ "logits/chosen": -1.4170001745224,
648
+ "logits/rejected": -1.6189939975738525,
649
+ "logps/chosen": -3.634209156036377,
650
+ "logps/rejected": -5.206550121307373,
651
+ "loss": 26.5886,
652
+ "rewards/accuracies": 0.6000000238418579,
653
+ "rewards/chosen": -0.2781711220741272,
654
+ "rewards/margins": 0.10549378395080566,
655
+ "rewards/rejected": -0.38366490602493286,
656
+ "step": 205
657
+ },
658
+ {
659
+ "epoch": 0.16522423288749016,
660
+ "grad_norm": 109.77017211914062,
661
+ "learning_rate": 3.9565234881497835e-05,
662
+ "logits/chosen": -1.5879325866699219,
663
+ "logits/rejected": -1.6509323120117188,
664
+ "logps/chosen": -2.8328652381896973,
665
+ "logps/rejected": -3.345362901687622,
666
+ "loss": 22.6272,
667
+ "rewards/accuracies": 0.6875,
668
+ "rewards/chosen": -0.20151302218437195,
669
+ "rewards/margins": 0.04314180836081505,
670
+ "rewards/rejected": -0.2446548491716385,
671
+ "step": 210
672
+ },
673
+ {
674
+ "epoch": 0.16915814319433517,
675
+ "grad_norm": 162.56524658203125,
676
+ "learning_rate": 3.950639135810326e-05,
677
+ "logits/chosen": -1.6067664623260498,
678
+ "logits/rejected": -1.7900478839874268,
679
+ "logps/chosen": -3.400160312652588,
680
+ "logps/rejected": -4.594171047210693,
681
+ "loss": 20.8583,
682
+ "rewards/accuracies": 0.699999988079071,
683
+ "rewards/chosen": -0.2197611778974533,
684
+ "rewards/margins": 0.0824299305677414,
685
+ "rewards/rejected": -0.3021911084651947,
686
+ "step": 215
687
+ },
688
+ {
689
+ "epoch": 0.17309205350118018,
690
+ "grad_norm": 134.93789672851562,
691
+ "learning_rate": 3.944386385362683e-05,
692
+ "logits/chosen": -1.7304567098617554,
693
+ "logits/rejected": -1.7651231288909912,
694
+ "logps/chosen": -4.1914567947387695,
695
+ "logps/rejected": -5.027632713317871,
696
+ "loss": 21.3514,
697
+ "rewards/accuracies": 0.699999988079071,
698
+ "rewards/chosen": -0.2970493733882904,
699
+ "rewards/margins": 0.06264514476060867,
700
+ "rewards/rejected": -0.3596945106983185,
701
+ "step": 220
702
+ },
703
+ {
704
+ "epoch": 0.17702596380802518,
705
+ "grad_norm": 106.4392318725586,
706
+ "learning_rate": 3.937766417702591e-05,
707
+ "logits/chosen": -1.645422339439392,
708
+ "logits/rejected": -1.7480659484863281,
709
+ "logps/chosen": -4.8556694984436035,
710
+ "logps/rejected": -5.551773548126221,
711
+ "loss": 25.2991,
712
+ "rewards/accuracies": 0.574999988079071,
713
+ "rewards/chosen": -0.39037808775901794,
714
+ "rewards/margins": 0.040518540889024734,
715
+ "rewards/rejected": -0.4308966100215912,
716
+ "step": 225
717
+ },
718
+ {
719
+ "epoch": 0.1809598741148702,
720
+ "grad_norm": 75.40190124511719,
721
+ "learning_rate": 3.9307804830785033e-05,
722
+ "logits/chosen": -1.710780382156372,
723
+ "logits/rejected": -1.759790062904358,
724
+ "logps/chosen": -4.1053643226623535,
725
+ "logps/rejected": -5.580018520355225,
726
+ "loss": 20.5995,
727
+ "rewards/accuracies": 0.6499999761581421,
728
+ "rewards/chosen": -0.32871341705322266,
729
+ "rewards/margins": 0.09709561616182327,
730
+ "rewards/rejected": -0.42580899596214294,
731
+ "step": 230
732
+ },
733
+ {
734
+ "epoch": 0.1848937844217152,
735
+ "grad_norm": 129.5159912109375,
736
+ "learning_rate": 3.923429900855468e-05,
737
+ "logits/chosen": -1.5250613689422607,
738
+ "logits/rejected": -1.7375835180282593,
739
+ "logps/chosen": -3.8208870887756348,
740
+ "logps/rejected": -5.471742630004883,
741
+ "loss": 19.4905,
742
+ "rewards/accuracies": 0.675000011920929,
743
+ "rewards/chosen": -0.2826174199581146,
744
+ "rewards/margins": 0.10505588352680206,
745
+ "rewards/rejected": -0.3876733183860779,
746
+ "step": 235
747
+ },
748
+ {
749
+ "epoch": 0.1888276947285602,
750
+ "grad_norm": 358.99334716796875,
751
+ "learning_rate": 3.915716059265956e-05,
752
+ "logits/chosen": -1.2919423580169678,
753
+ "logits/rejected": -1.5058711767196655,
754
+ "logps/chosen": -4.358604907989502,
755
+ "logps/rejected": -5.214530944824219,
756
+ "loss": 21.5458,
757
+ "rewards/accuracies": 0.675000011920929,
758
+ "rewards/chosen": -0.30291062593460083,
759
+ "rewards/margins": 0.06571656465530396,
760
+ "rewards/rejected": -0.3686271905899048,
761
+ "step": 240
762
+ },
763
+ {
764
+ "epoch": 0.19276160503540518,
765
+ "grad_norm": 83.99627685546875,
766
+ "learning_rate": 3.907640415147675e-05,
767
+ "logits/chosen": -1.1905521154403687,
768
+ "logits/rejected": -1.4401142597198486,
769
+ "logps/chosen": -3.447228193283081,
770
+ "logps/rejected": -4.264595985412598,
771
+ "loss": 21.469,
772
+ "rewards/accuracies": 0.6875,
773
+ "rewards/chosen": -0.2503766417503357,
774
+ "rewards/margins": 0.06672003120183945,
775
+ "rewards/rejected": -0.31709665060043335,
776
+ "step": 245
777
+ },
778
+ {
779
+ "epoch": 0.1966955153422502,
780
+ "grad_norm": 97.8551025390625,
781
+ "learning_rate": 3.8992044936684326e-05,
782
+ "logits/chosen": -1.167415976524353,
783
+ "logits/rejected": -1.3312307596206665,
784
+ "logps/chosen": -3.2072510719299316,
785
+ "logps/rejected": -3.7459397315979004,
786
+ "loss": 24.394,
787
+ "rewards/accuracies": 0.5625,
788
+ "rewards/chosen": -0.23177361488342285,
789
+ "rewards/margins": 0.04146546125411987,
790
+ "rewards/rejected": -0.2732390761375427,
791
+ "step": 250
792
+ },
793
+ {
794
+ "epoch": 0.2006294256490952,
795
+ "grad_norm": 81.79573822021484,
796
+ "learning_rate": 3.8904098880380946e-05,
797
+ "logits/chosen": -1.0507287979125977,
798
+ "logits/rejected": -1.1515988111495972,
799
+ "logps/chosen": -2.6618685722351074,
800
+ "logps/rejected": -3.6504600048065186,
801
+ "loss": 21.5489,
802
+ "rewards/accuracies": 0.6000000238418579,
803
+ "rewards/chosen": -0.17464396357536316,
804
+ "rewards/margins": 0.08120186626911163,
805
+ "rewards/rejected": -0.2558458149433136,
806
+ "step": 255
807
+ },
808
+ {
809
+ "epoch": 0.2045633359559402,
810
+ "grad_norm": 74.57398986816406,
811
+ "learning_rate": 3.881258259207688e-05,
812
+ "logits/chosen": -1.026132583618164,
813
+ "logits/rejected": -1.1835418939590454,
814
+ "logps/chosen": -3.0442395210266113,
815
+ "logps/rejected": -3.4265129566192627,
816
+ "loss": 24.2589,
817
+ "rewards/accuracies": 0.5874999761581421,
818
+ "rewards/chosen": -0.18103419244289398,
819
+ "rewards/margins": 0.02801087498664856,
820
+ "rewards/rejected": -0.20904505252838135,
821
+ "step": 260
822
+ },
823
+ {
824
+ "epoch": 0.2084972462627852,
825
+ "grad_norm": 136.9661102294922,
826
+ "learning_rate": 3.8717513355557156e-05,
827
+ "logits/chosen": -1.0285115242004395,
828
+ "logits/rejected": -1.2167742252349854,
829
+ "logps/chosen": -2.512547016143799,
830
+ "logps/rejected": -3.510410785675049,
831
+ "loss": 22.8105,
832
+ "rewards/accuracies": 0.7749999761581421,
833
+ "rewards/chosen": -0.1588296890258789,
834
+ "rewards/margins": 0.07337381690740585,
835
+ "rewards/rejected": -0.23220351338386536,
836
+ "step": 265
837
+ },
838
+ {
839
+ "epoch": 0.21243115656963021,
840
+ "grad_norm": 69.99120330810547,
841
+ "learning_rate": 3.861890912561731e-05,
842
+ "logits/chosen": -0.8553465604782104,
843
+ "logits/rejected": -1.1523014307022095,
844
+ "logps/chosen": -2.420487880706787,
845
+ "logps/rejected": -3.4689393043518066,
846
+ "loss": 20.5014,
847
+ "rewards/accuracies": 0.6499999761581421,
848
+ "rewards/chosen": -0.15432177484035492,
849
+ "rewards/margins": 0.08016739785671234,
850
+ "rewards/rejected": -0.23448920249938965,
851
+ "step": 270
852
+ },
853
+ {
854
+ "epoch": 0.21636506687647522,
855
+ "grad_norm": 75.06553649902344,
856
+ "learning_rate": 3.85167885246725e-05,
857
+ "logits/chosen": -1.0096137523651123,
858
+ "logits/rejected": -0.9508267641067505,
859
+ "logps/chosen": -3.4687724113464355,
860
+ "logps/rejected": -4.444920539855957,
861
+ "loss": 22.2787,
862
+ "rewards/accuracies": 0.637499988079071,
863
+ "rewards/chosen": -0.2557123303413391,
864
+ "rewards/margins": 0.07582716643810272,
865
+ "rewards/rejected": -0.3315395414829254,
866
+ "step": 275
867
+ },
868
+ {
869
+ "epoch": 0.22029897718332023,
870
+ "grad_norm": 80.17613983154297,
871
+ "learning_rate": 3.8411170839240394e-05,
872
+ "logits/chosen": -0.9037753939628601,
873
+ "logits/rejected": -0.9584333300590515,
874
+ "logps/chosen": -3.32795786857605,
875
+ "logps/rejected": -4.637690544128418,
876
+ "loss": 23.2314,
877
+ "rewards/accuracies": 0.612500011920929,
878
+ "rewards/chosen": -0.2455664873123169,
879
+ "rewards/margins": 0.09604751318693161,
880
+ "rewards/rejected": -0.3416139781475067,
881
+ "step": 280
882
+ },
883
+ {
884
+ "epoch": 0.22423288749016523,
885
+ "grad_norm": 51.01013946533203,
886
+ "learning_rate": 3.8302076016298786e-05,
887
+ "logits/chosen": -0.7821402549743652,
888
+ "logits/rejected": -0.8784409761428833,
889
+ "logps/chosen": -3.464825391769409,
890
+ "logps/rejected": -4.3120927810668945,
891
+ "loss": 26.3572,
892
+ "rewards/accuracies": 0.5874999761581421,
893
+ "rewards/chosen": -0.22129957377910614,
894
+ "rewards/margins": 0.05804131552577019,
895
+ "rewards/rejected": -0.27934086322784424,
896
+ "step": 285
897
+ },
898
+ {
899
+ "epoch": 0.22816679779701024,
900
+ "grad_norm": 50.36003494262695,
901
+ "learning_rate": 3.818952465951836e-05,
902
+ "logits/chosen": -0.8527859449386597,
903
+ "logits/rejected": -1.0032708644866943,
904
+ "logps/chosen": -2.805267810821533,
905
+ "logps/rejected": -3.6299731731414795,
906
+ "loss": 22.6246,
907
+ "rewards/accuracies": 0.625,
908
+ "rewards/chosen": -0.19116182625293732,
909
+ "rewards/margins": 0.04059046879410744,
910
+ "rewards/rejected": -0.23175227642059326,
911
+ "step": 290
912
+ },
913
+ {
914
+ "epoch": 0.23210070810385522,
915
+ "grad_norm": 73.20655059814453,
916
+ "learning_rate": 3.80735380253715e-05,
917
+ "logits/chosen": -1.153649926185608,
918
+ "logits/rejected": -1.3198211193084717,
919
+ "logps/chosen": -3.344242811203003,
920
+ "logps/rejected": -3.948678970336914,
921
+ "loss": 23.0667,
922
+ "rewards/accuracies": 0.6000000238418579,
923
+ "rewards/chosen": -0.24411065876483917,
924
+ "rewards/margins": 0.03483257442712784,
925
+ "rewards/rejected": -0.2789432406425476,
926
+ "step": 295
927
+ },
928
+ {
929
+ "epoch": 0.23603461841070023,
930
+ "grad_norm": 74.69874572753906,
931
+ "learning_rate": 3.7954138019117764e-05,
932
+ "logits/chosen": -1.2777029275894165,
933
+ "logits/rejected": -1.4231250286102295,
934
+ "logps/chosen": -3.8555595874786377,
935
+ "logps/rejected": -4.460744380950928,
936
+ "loss": 23.7264,
937
+ "rewards/accuracies": 0.5625,
938
+ "rewards/chosen": -0.2938821017742157,
939
+ "rewards/margins": 0.04352904483675957,
940
+ "rewards/rejected": -0.33741116523742676,
941
+ "step": 300
942
+ },
943
+ {
944
+ "epoch": 0.23603461841070023,
945
+ "eval_logits/chosen": -1.299351453781128,
946
+ "eval_logits/rejected": -1.5808088779449463,
947
+ "eval_logps/chosen": -3.9815101623535156,
948
+ "eval_logps/rejected": -4.813979148864746,
949
+ "eval_loss": 22.46109962463379,
950
+ "eval_rewards/accuracies": 0.6578124761581421,
951
+ "eval_rewards/chosen": -0.29563087224960327,
952
+ "eval_rewards/margins": 0.06121987849473953,
953
+ "eval_rewards/rejected": -0.3568507432937622,
954
+ "eval_runtime": 256.5735,
955
+ "eval_samples_per_second": 2.494,
956
+ "eval_steps_per_second": 0.156,
957
+ "step": 300
958
+ },
959
+ {
960
+ "epoch": 0.23996852871754523,
961
+ "grad_norm": 100.70768737792969,
962
+ "learning_rate": 3.7831347190666886e-05,
963
+ "logits/chosen": -1.4278929233551025,
964
+ "logits/rejected": -1.5912885665893555,
965
+ "logps/chosen": -4.3169355392456055,
966
+ "logps/rejected": -5.223280906677246,
967
+ "loss": 21.9094,
968
+ "rewards/accuracies": 0.6625000238418579,
969
+ "rewards/chosen": -0.32976609468460083,
970
+ "rewards/margins": 0.05834154412150383,
971
+ "rewards/rejected": -0.38810762763023376,
972
+ "step": 305
973
+ },
974
+ {
975
+ "epoch": 0.24390243902439024,
976
+ "grad_norm": 220.49977111816406,
977
+ "learning_rate": 3.770518873031997e-05,
978
+ "logits/chosen": -1.3963868618011475,
979
+ "logits/rejected": -1.5353944301605225,
980
+ "logps/chosen": -4.6622209548950195,
981
+ "logps/rejected": -5.298556327819824,
982
+ "loss": 25.9132,
983
+ "rewards/accuracies": 0.5874999761581421,
984
+ "rewards/chosen": -0.34076324105262756,
985
+ "rewards/margins": 0.03211987391114235,
986
+ "rewards/rejected": -0.3728831112384796,
987
+ "step": 310
988
+ },
989
+ {
990
+ "epoch": 0.24783634933123525,
991
+ "grad_norm": 45.653079986572266,
992
+ "learning_rate": 3.757568646438977e-05,
993
+ "logits/chosen": -1.3604671955108643,
994
+ "logits/rejected": -1.4712865352630615,
995
+ "logps/chosen": -4.468562126159668,
996
+ "logps/rejected": -5.09032678604126,
997
+ "loss": 23.4367,
998
+ "rewards/accuracies": 0.550000011920929,
999
+ "rewards/chosen": -0.32228899002075195,
1000
+ "rewards/margins": 0.037079013884067535,
1001
+ "rewards/rejected": -0.3593679964542389,
1002
+ "step": 315
1003
+ },
1004
+ {
1005
+ "epoch": 0.25177025963808025,
1006
+ "grad_norm": 66.92400360107422,
1007
+ "learning_rate": 3.744286485070085e-05,
1008
+ "logits/chosen": -1.082240343093872,
1009
+ "logits/rejected": -1.4719394445419312,
1010
+ "logps/chosen": -4.237619876861572,
1011
+ "logps/rejected": -5.15458345413208,
1012
+ "loss": 23.5263,
1013
+ "rewards/accuracies": 0.6875,
1014
+ "rewards/chosen": -0.33307066559791565,
1015
+ "rewards/margins": 0.0633757933974266,
1016
+ "rewards/rejected": -0.39644646644592285,
+ "step": 320
+ },
+ {
+ "epoch": 0.25570416994492523,
+ "grad_norm": 177.88168334960938,
+ "learning_rate": 3.730674897397048e-05,
+ "logits/chosen": -1.114916443824768,
+ "logits/rejected": -1.6648147106170654,
+ "logps/chosen": -4.053045749664307,
+ "logps/rejected": -5.189083099365234,
+ "loss": 22.1226,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -0.3164909780025482,
+ "rewards/margins": 0.07537268847227097,
+ "rewards/rejected": -0.3918636739253998,
+ "step": 325
+ },
+ {
+ "epoch": 0.25963808025177026,
+ "grad_norm": 100.30528259277344,
+ "learning_rate": 3.7167364541071115e-05,
+ "logits/chosen": -0.988497257232666,
+ "logits/rejected": -1.1937472820281982,
+ "logps/chosen": -4.265296459197998,
+ "logps/rejected": -4.853640556335449,
+ "loss": 20.9283,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.3129270374774933,
+ "rewards/margins": 0.0673552006483078,
+ "rewards/rejected": -0.3802822232246399,
+ "step": 330
+ },
+ {
+ "epoch": 0.26357199055861524,
+ "grad_norm": 83.86753845214844,
+ "learning_rate": 3.7024737876175406e-05,
+ "logits/chosen": -0.8350197076797485,
+ "logits/rejected": -1.1670969724655151,
+ "logps/chosen": -5.4864630699157715,
+ "logps/rejected": -6.544106960296631,
+ "loss": 20.1224,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.4627310335636139,
+ "rewards/margins": 0.0860290378332138,
+ "rewards/rejected": -0.5487600564956665,
+ "step": 335
+ },
+ {
+ "epoch": 0.2675059008654603,
+ "grad_norm": 73.52009582519531,
+ "learning_rate": 3.6878895915784616e-05,
+ "logits/chosen": -0.6294984221458435,
+ "logits/rejected": -0.6951633095741272,
+ "logps/chosen": -6.682524681091309,
+ "logps/rejected": -7.499720573425293,
+ "loss": 23.3106,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -0.576111912727356,
+ "rewards/margins": 0.06401752680540085,
+ "rewards/rejected": -0.6401294469833374,
+ "step": 340
+ },
+ {
+ "epoch": 0.27143981117230526,
+ "grad_norm": 47.29294204711914,
+ "learning_rate": 3.6729866203641346e-05,
+ "logits/chosen": -0.30728015303611755,
+ "logits/rejected": -0.7238900065422058,
+ "logps/chosen": -5.2912421226501465,
+ "logps/rejected": -6.754895210266113,
+ "loss": 20.956,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.44258102774620056,
+ "rewards/margins": 0.09092569351196289,
+ "rewards/rejected": -0.5335067510604858,
+ "step": 345
+ },
+ {
+ "epoch": 0.2753737214791503,
+ "grad_norm": 92.66437530517578,
+ "learning_rate": 3.6577676885527676e-05,
+ "logits/chosen": -0.4043883681297302,
+ "logits/rejected": -0.6512165069580078,
+ "logps/chosen": -4.932716369628906,
+ "logps/rejected": -6.014449596405029,
+ "loss": 19.8567,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -0.39097142219543457,
+ "rewards/margins": 0.09061526507139206,
+ "rewards/rejected": -0.48158663511276245,
+ "step": 350
+ },
+ {
+ "epoch": 0.27930763178599527,
+ "grad_norm": 147.38983154296875,
+ "learning_rate": 3.6422356703949525e-05,
+ "logits/chosen": -0.1327817142009735,
+ "logits/rejected": -0.62315833568573,
+ "logps/chosen": -5.30325984954834,
+ "logps/rejected": -6.832304954528809,
+ "loss": 21.7519,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.4314250946044922,
+ "rewards/margins": 0.1263391077518463,
+ "rewards/rejected": -0.5577641725540161,
+ "step": 355
+ },
+ {
+ "epoch": 0.2832415420928403,
+ "grad_norm": 31.5225772857666,
+ "learning_rate": 3.62639349927083e-05,
+ "logits/chosen": -0.2402796745300293,
+ "logits/rejected": -0.8563323020935059,
+ "logps/chosen": -4.622067928314209,
+ "logps/rejected": -6.146195411682129,
+ "loss": 17.5716,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": -0.37094250321388245,
+ "rewards/margins": 0.14315392076969147,
+ "rewards/rejected": -0.5140964388847351,
+ "step": 360
+ },
+ {
+ "epoch": 0.2871754523996853,
+ "grad_norm": 62.2461051940918,
+ "learning_rate": 3.610244167136095e-05,
+ "logits/chosen": -0.09466689825057983,
+ "logits/rejected": -0.4544064402580261,
+ "logps/chosen": -5.546882629394531,
+ "logps/rejected": -6.525860786437988,
+ "loss": 24.1383,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.47001561522483826,
+ "rewards/margins": 0.06077291816473007,
+ "rewards/rejected": -0.5307885408401489,
+ "step": 365
+ },
+ {
+ "epoch": 0.2911093627065303,
+ "grad_norm": 48.28466796875,
+ "learning_rate": 3.593790723956935e-05,
+ "logits/chosen": -0.2374683916568756,
+ "logits/rejected": -0.373137503862381,
+ "logps/chosen": -8.164457321166992,
+ "logps/rejected": -8.156596183776855,
+ "loss": 27.2267,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": -0.7038506269454956,
+ "rewards/margins": 0.0024669456761330366,
+ "rewards/rejected": -0.7063175439834595,
+ "step": 370
+ },
+ {
+ "epoch": 0.2950432730133753,
+ "grad_norm": 51.48979568481445,
+ "learning_rate": 3.577036277134012e-05,
+ "logits/chosen": 0.5883103609085083,
+ "logits/rejected": 0.345896452665329,
+ "logps/chosen": -7.786595344543457,
+ "logps/rejected": -8.453828811645508,
+ "loss": 23.7109,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": -0.6850046515464783,
+ "rewards/margins": 0.03222974017262459,
+ "rewards/rejected": -0.7172344326972961,
+ "step": 375
+ },
+ {
+ "epoch": 0.2989771833202203,
+ "grad_norm": 47.203521728515625,
+ "learning_rate": 3.5599839909155954e-05,
+ "logits/chosen": 0.8187308311462402,
+ "logits/rejected": 0.5813020467758179,
+ "logps/chosen": -7.474339962005615,
+ "logps/rejected": -8.487818717956543,
+ "loss": 21.9183,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.649897038936615,
+ "rewards/margins": 0.06481163203716278,
+ "rewards/rejected": -0.714708685874939,
+ "step": 380
+ },
+ {
+ "epoch": 0.3029110936270653,
+ "grad_norm": 66.43785095214844,
+ "learning_rate": 3.542637085799967e-05,
+ "logits/chosen": 0.7243896722793579,
+ "logits/rejected": 0.633999228477478,
+ "logps/chosen": -6.237640380859375,
+ "logps/rejected": -7.760465145111084,
+ "loss": 21.8449,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -0.5326268672943115,
+ "rewards/margins": 0.1084168553352356,
+ "rewards/rejected": -0.6410436630249023,
+ "step": 385
+ },
+ {
+ "epoch": 0.3068450039339103,
+ "grad_norm": 37.43156814575195,
+ "learning_rate": 3.524998837927192e-05,
+ "logits/chosen": 0.06674204766750336,
+ "logits/rejected": -0.2028542459011078,
+ "logps/chosen": -3.726958751678467,
+ "logps/rejected": -4.262044906616211,
+ "loss": 24.4969,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.28034740686416626,
+ "rewards/margins": 0.04259229078888893,
+ "rewards/rejected": -0.3229396939277649,
+ "step": 390
+ },
+ {
+ "epoch": 0.3107789142407553,
+ "grad_norm": 43.61391067504883,
+ "learning_rate": 3.5070725784603906e-05,
+ "logits/chosen": -0.40263357758522034,
+ "logits/rejected": -0.7386992573738098,
+ "logps/chosen": -2.9942004680633545,
+ "logps/rejected": -3.7471251487731934,
+ "loss": 22.408,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.19999414682388306,
+ "rewards/margins": 0.05711622163653374,
+ "rewards/rejected": -0.2571103572845459,
+ "step": 395
+ },
+ {
+ "epoch": 0.3147128245476003,
+ "grad_norm": 39.55924606323242,
+ "learning_rate": 3.488861692956612e-05,
+ "logits/chosen": -0.3814232647418976,
+ "logits/rejected": -0.7326269149780273,
+ "logps/chosen": -3.743687391281128,
+ "logps/rejected": -4.247666835784912,
+ "loss": 22.5808,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.2401810586452484,
+ "rewards/margins": 0.03669751435518265,
+ "rewards/rejected": -0.27687856554985046,
+ "step": 400
+ },
+ {
+ "epoch": 0.3147128245476003,
+ "eval_logits/chosen": 1.05367910861969,
+ "eval_logits/rejected": 0.8413508534431458,
+ "eval_logps/chosen": -3.476516008377075,
+ "eval_logps/rejected": -4.1527628898620605,
+ "eval_loss": 22.322988510131836,
+ "eval_rewards/accuracies": 0.643750011920929,
+ "eval_rewards/chosen": -0.2451314926147461,
+ "eval_rewards/margins": 0.045597635209560394,
+ "eval_rewards/rejected": -0.2907291054725647,
+ "eval_runtime": 262.6324,
+ "eval_samples_per_second": 2.437,
+ "eval_steps_per_second": 0.152,
+ "step": 400
+ },
+ {
+ "epoch": 0.31864673485444533,
+ "grad_norm": 106.93277740478516,
+ "learning_rate": 3.470369620727433e-05,
+ "logits/chosen": -0.2646043300628662,
+ "logits/rejected": -0.5908231735229492,
+ "logps/chosen": -4.6682844161987305,
+ "logps/rejected": -4.9077653884887695,
+ "loss": 24.9016,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": -0.31124863028526306,
+ "rewards/margins": 0.024502381682395935,
+ "rewards/rejected": -0.3357509970664978,
+ "step": 405
+ },
+ {
+ "epoch": 0.3225806451612903,
+ "grad_norm": 74.05709838867188,
+ "learning_rate": 3.451599854189419e-05,
+ "logits/chosen": 0.022719597443938255,
+ "logits/rejected": -0.2066432684659958,
+ "logps/chosen": -4.9405951499938965,
+ "logps/rejected": -5.324969291687012,
+ "loss": 24.7214,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -0.37897494435310364,
+ "rewards/margins": 0.023352503776550293,
+ "rewards/rejected": -0.40232744812965393,
+ "step": 410
+ },
+ {
+ "epoch": 0.32651455546813535,
+ "grad_norm": 39.29796600341797,
+ "learning_rate": 3.4325559382045344e-05,
+ "logits/chosen": 0.5940214395523071,
+ "logits/rejected": 0.3333088755607605,
+ "logps/chosen": -4.753328323364258,
+ "logps/rejected": -5.081197738647461,
+ "loss": 24.0772,
+ "rewards/accuracies": 0.5249999761581421,
+ "rewards/chosen": -0.36598071455955505,
+ "rewards/margins": 0.0194702185690403,
+ "rewards/rejected": -0.38545092940330505,
+ "step": 415
+ },
+ {
+ "epoch": 0.3304484657749803,
+ "grad_norm": 77.69596862792969,
+ "learning_rate": 3.413241469410669e-05,
+ "logits/chosen": 0.5746585726737976,
+ "logits/rejected": 0.37664318084716797,
+ "logps/chosen": -4.366871356964111,
+ "logps/rejected": -4.923527717590332,
+ "loss": 23.106,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.33017784357070923,
+ "rewards/margins": 0.03850778192281723,
+ "rewards/rejected": -0.36868563294410706,
+ "step": 420
+ },
+ {
+ "epoch": 0.33438237608182536,
+ "grad_norm": 37.37318801879883,
+ "learning_rate": 3.3936600955423684e-05,
+ "logits/chosen": 0.5197581052780151,
+ "logits/rejected": 0.22815366089344025,
+ "logps/chosen": -4.27942419052124,
+ "logps/rejected": -5.002130031585693,
+ "loss": 21.5593,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -0.3183726668357849,
+ "rewards/margins": 0.0549759566783905,
+ "rewards/rejected": -0.3733486235141754,
+ "step": 425
+ },
+ {
+ "epoch": 0.33831628638867034,
+ "grad_norm": 51.467933654785156,
+ "learning_rate": 3.373815514741928e-05,
+ "logits/chosen": 0.5920094847679138,
+ "logits/rejected": 0.27607864141464233,
+ "logps/chosen": -4.812392234802246,
+ "logps/rejected": -6.322897434234619,
+ "loss": 20.3197,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.3940238356590271,
+ "rewards/margins": 0.08363697677850723,
+ "rewards/rejected": -0.47766080498695374,
+ "step": 430
+ },
+ {
+ "epoch": 0.3422501966955153,
+ "grad_norm": 92.83573150634766,
+ "learning_rate": 3.353711474860957e-05,
+ "logits/chosen": 0.2608449459075928,
+ "logits/rejected": 0.022218376398086548,
+ "logps/chosen": -6.094309329986572,
+ "logps/rejected": -6.749911308288574,
+ "loss": 22.7358,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": -0.48861637711524963,
+ "rewards/margins": 0.06393507868051529,
+ "rewards/rejected": -0.5525515079498291,
+ "step": 435
+ },
+ {
+ "epoch": 0.34618410700236035,
+ "grad_norm": 117.08031463623047,
+ "learning_rate": 3.333351772752559e-05,
+ "logits/chosen": 0.36764952540397644,
+ "logits/rejected": 0.11572384834289551,
+ "logps/chosen": -5.9948601722717285,
+ "logps/rejected": -7.810961723327637,
+ "loss": 22.5633,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.4657825827598572,
+ "rewards/margins": 0.0877792239189148,
+ "rewards/rejected": -0.553561806678772,
+ "step": 440
+ },
+ {
+ "epoch": 0.35011801730920533,
+ "grad_norm": 42.3343620300293,
+ "learning_rate": 3.31274025355426e-05,
+ "logits/chosen": -0.1417434960603714,
+ "logits/rejected": -0.3869401216506958,
+ "logps/chosen": -4.742400169372559,
+ "logps/rejected": -5.590722560882568,
+ "loss": 23.695,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": -0.37819725275039673,
+ "rewards/margins": 0.05572297424077988,
+ "rewards/rejected": -0.4339202046394348,
+ "step": 445
+ },
+ {
+ "epoch": 0.35405192761605037,
+ "grad_norm": 48.5923957824707,
+ "learning_rate": 3.2918808099618145e-05,
+ "logits/chosen": -0.13320264220237732,
+ "logits/rejected": -0.4407239854335785,
+ "logps/chosen": -4.052382946014404,
+ "logps/rejected": -5.202944755554199,
+ "loss": 23.9333,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.2898481488227844,
+ "rewards/margins": 0.06827215105295181,
+ "rewards/rejected": -0.35812026262283325,
+ "step": 450
+ },
+ {
+ "epoch": 0.35798583792289534,
+ "grad_norm": 40.526222229003906,
+ "learning_rate": 3.270777381494025e-05,
+ "logits/chosen": 0.15568742156028748,
+ "logits/rejected": -0.1973900943994522,
+ "logps/chosen": -3.3848178386688232,
+ "logps/rejected": -4.244564056396484,
+ "loss": 22.4256,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.24747803807258606,
+ "rewards/margins": 0.0624859556555748,
+ "rewards/rejected": -0.30996400117874146,
+ "step": 455
+ },
+ {
+ "epoch": 0.3619197482297404,
+ "grad_norm": 39.41511917114258,
+ "learning_rate": 3.2494339537487316e-05,
+ "logits/chosen": 0.2709997296333313,
+ "logits/rejected": -0.02324852906167507,
+ "logps/chosen": -4.2176713943481445,
+ "logps/rejected": -4.712052345275879,
+ "loss": 22.1339,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.30599862337112427,
+ "rewards/margins": 0.05780113860964775,
+ "rewards/rejected": -0.3637998104095459,
+ "step": 460
+ },
+ {
+ "epoch": 0.36585365853658536,
+ "grad_norm": 64.34850311279297,
+ "learning_rate": 3.227854557650086e-05,
+ "logits/chosen": 0.43827542662620544,
+ "logits/rejected": 0.3022671937942505,
+ "logps/chosen": -4.146628379821777,
+ "logps/rejected": -4.721283912658691,
+ "loss": 24.9713,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.32393088936805725,
+ "rewards/margins": 0.03894919902086258,
+ "rewards/rejected": -0.36288008093833923,
+ "step": 465
+ },
+ {
+ "epoch": 0.3697875688434304,
+ "grad_norm": 48.07391357421875,
+ "learning_rate": 3.206043268687271e-05,
+ "logits/chosen": 0.8742543458938599,
+ "logits/rejected": 0.6306554079055786,
+ "logps/chosen": -4.235721111297607,
+ "logps/rejected": -4.655104160308838,
+ "loss": 24.504,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.3277904689311981,
+ "rewards/margins": 0.03070756234228611,
+ "rewards/rejected": -0.3584980368614197,
+ "step": 470
+ },
+ {
+ "epoch": 0.37372147915027537,
+ "grad_norm": 48.7944450378418,
+ "learning_rate": 3.1840042061448034e-05,
+ "logits/chosen": 0.8953019380569458,
+ "logits/rejected": 0.6509414315223694,
+ "logps/chosen": -3.809953212738037,
+ "logps/rejected": -4.727410793304443,
+ "loss": 21.5063,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": -0.2697634696960449,
+ "rewards/margins": 0.04955270141363144,
+ "rewards/rejected": -0.31931617856025696,
+ "step": 475
+ },
+ {
+ "epoch": 0.3776553894571204,
+ "grad_norm": 50.050758361816406,
+ "learning_rate": 3.161741532324567e-05,
+ "logits/chosen": 0.6901669502258301,
+ "logits/rejected": 0.4217056632041931,
+ "logps/chosen": -3.9408392906188965,
+ "logps/rejected": -4.815566062927246,
+ "loss": 20.9824,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.3004041314125061,
+ "rewards/margins": 0.06339363753795624,
+ "rewards/rejected": -0.36379775404930115,
+ "step": 480
+ },
+ {
+ "epoch": 0.3815892997639654,
+ "grad_norm": 31.20033836364746,
+ "learning_rate": 3.139259451759715e-05,
+ "logits/chosen": 0.13000288605690002,
+ "logits/rejected": -0.044496648013591766,
+ "logps/chosen": -3.5587058067321777,
+ "logps/rejected": -4.0875444412231445,
+ "loss": 23.3392,
+ "rewards/accuracies": 0.550000011920929,
+ "rewards/chosen": -0.25211071968078613,
+ "rewards/margins": 0.0408257320523262,
+ "rewards/rejected": -0.2929364740848541,
+ "step": 485
+ },
+ {
+ "epoch": 0.38552321007081036,
+ "grad_norm": 42.778175354003906,
+ "learning_rate": 3.116562210420604e-05,
+ "logits/chosen": 0.001088732504285872,
+ "logits/rejected": -0.32067522406578064,
+ "logps/chosen": -3.792126178741455,
+ "logps/rejected": -5.263632297515869,
+ "loss": 19.5225,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.2830085754394531,
+ "rewards/margins": 0.11108819395303726,
+ "rewards/rejected": -0.3940967619419098,
+ "step": 490
+ },
+ {
+ "epoch": 0.3894571203776554,
+ "grad_norm": 51.158023834228516,
+ "learning_rate": 3.093654094912901e-05,
+ "logits/chosen": 0.14770345389842987,
+ "logits/rejected": -0.27998632192611694,
+ "logps/chosen": -3.2016499042510986,
+ "logps/rejected": -4.281766414642334,
+ "loss": 20.9501,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -0.23279622197151184,
+ "rewards/margins": 0.07809507101774216,
+ "rewards/rejected": -0.3108913004398346,
+ "step": 495
+ },
+ {
+ "epoch": 0.3933910306845004,
+ "grad_norm": 56.20425033569336,
+ "learning_rate": 3.070539431668008e-05,
+ "logits/chosen": 0.262935608625412,
+ "logits/rejected": 0.04751387611031532,
+ "logps/chosen": -3.5432257652282715,
+ "logps/rejected": -4.892638683319092,
+ "loss": 19.8621,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.2724161148071289,
+ "rewards/margins": 0.10621275752782822,
+ "rewards/rejected": -0.3786288797855377,
+ "step": 500
+ },
+ {
+ "epoch": 0.3933910306845004,
+ "eval_logits/chosen": 0.6740007400512695,
+ "eval_logits/rejected": 0.4591088891029358,
+ "eval_logps/chosen": -4.5571136474609375,
+ "eval_logps/rejected": -5.613867282867432,
+ "eval_loss": 21.918312072753906,
+ "eval_rewards/accuracies": 0.659375011920929,
+ "eval_rewards/chosen": -0.35319122672080994,
+ "eval_rewards/margins": 0.08364833891391754,
+ "eval_rewards/rejected": -0.43683958053588867,
+ "eval_runtime": 263.6435,
+ "eval_samples_per_second": 2.428,
+ "eval_steps_per_second": 0.152,
+ "step": 500
+ },
+ {
+ "epoch": 0.3973249409913454,
+ "grad_norm": 58.8707275390625,
+ "learning_rate": 3.0472225861259792e-05,
+ "logits/chosen": 0.5369864702224731,
+ "logits/rejected": 0.22990770637989044,
+ "logps/chosen": -4.736105442047119,
+ "logps/rejected": -6.215359687805176,
+ "loss": 19.8037,
+ "rewards/accuracies": 0.737500011920929,
+ "rewards/chosen": -0.3894490599632263,
+ "rewards/margins": 0.10317282378673553,
+ "rewards/rejected": -0.49262189865112305,
+ "step": 505
+ },
+ {
+ "epoch": 0.4012588512981904,
+ "grad_norm": 70.14684295654297,
+ "learning_rate": 3.023707961911056e-05,
+ "logits/chosen": 0.8286817669868469,
+ "logits/rejected": 0.553676187992096,
+ "logps/chosen": -5.788529872894287,
+ "logps/rejected": -7.536166191101074,
+ "loss": 17.8624,
+ "rewards/accuracies": 0.737500011920929,
+ "rewards/chosen": -0.49081355333328247,
+ "rewards/margins": 0.14760908484458923,
+ "rewards/rejected": -0.6384226083755493,
+ "step": 510
+ },
+ {
+ "epoch": 0.4051927616050354,
+ "grad_norm": 64.01305389404297,
+ "learning_rate": 3.0000000000000004e-05,
+ "logits/chosen": 0.9336326718330383,
+ "logits/rejected": 0.7778112292289734,
+ "logps/chosen": -6.726840972900391,
+ "logps/rejected": -7.550817966461182,
+ "loss": 22.4924,
+ "rewards/accuracies": 0.6000000238418579,
+ "rewards/chosen": -0.57944256067276,
+ "rewards/margins": 0.06806603819131851,
+ "rewards/rejected": -0.6475085616111755,
+ "step": 515
+ },
+ {
+ "epoch": 0.4091266719118804,
+ "grad_norm": 78.60610961914062,
+ "learning_rate": 2.976103177883374e-05,
+ "logits/chosen": 1.424285650253296,
+ "logits/rejected": 1.2528765201568604,
+ "logps/chosen": -6.3146467208862305,
+ "logps/rejected": -7.42047643661499,
+ "loss": 22.4632,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.5491231083869934,
+ "rewards/margins": 0.08809302002191544,
+ "rewards/rejected": -0.6372160911560059,
+ "step": 520
+ },
+ {
+ "epoch": 0.41306058221872544,
+ "grad_norm": 53.94759750366211,
+ "learning_rate": 2.9520220087199142e-05,
+ "logits/chosen": 2.070854663848877,
+ "logits/rejected": 1.9312273263931274,
+ "logps/chosen": -7.275876522064209,
+ "logps/rejected": -7.790124416351318,
+ "loss": 22.9161,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.6180551052093506,
+ "rewards/margins": 0.04511018842458725,
+ "rewards/rejected": -0.663165271282196,
+ "step": 525
+ },
+ {
+ "epoch": 0.4169944925255704,
+ "grad_norm": 56.0859489440918,
+ "learning_rate": 2.9277610404841792e-05,
+ "logits/chosen": 2.1373679637908936,
+ "logits/rejected": 1.8990551233291626,
+ "logps/chosen": -6.323026180267334,
+ "logps/rejected": -7.068973541259766,
+ "loss": 21.5577,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.5458236932754517,
+ "rewards/margins": 0.058930903673172,
+ "rewards/rejected": -0.6047546863555908,
+ "step": 530
+ },
+ {
+ "epoch": 0.4209284028324154,
+ "grad_norm": 38.24552536010742,
+ "learning_rate": 2.903324855107617e-05,
+ "logits/chosen": 1.698553442955017,
+ "logits/rejected": 1.4752168655395508,
+ "logps/chosen": -6.036180019378662,
+ "logps/rejected": -7.159039497375488,
+ "loss": 20.6918,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.4921696186065674,
+ "rewards/margins": 0.09151138365268707,
+ "rewards/rejected": -0.5836809873580933,
+ "step": 535
+ },
+ {
+ "epoch": 0.42486231313926043,
+ "grad_norm": 78.7889633178711,
+ "learning_rate": 2.8787180676132222e-05,
+ "logits/chosen": 1.3787410259246826,
+ "logits/rejected": 1.1702592372894287,
+ "logps/chosen": -5.483891010284424,
+ "logps/rejected": -6.863039493560791,
+ "loss": 21.8569,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.4555833339691162,
+ "rewards/margins": 0.10881421715021133,
+ "rewards/rejected": -0.5643975138664246,
+ "step": 540
+ },
+ {
+ "epoch": 0.4287962234461054,
+ "grad_norm": 58.13717269897461,
+ "learning_rate": 2.8539453252439388e-05,
+ "logits/chosen": 1.238527536392212,
+ "logits/rejected": 1.0659363269805908,
+ "logps/chosen": -4.524688243865967,
+ "logps/rejected": -5.795032978057861,
+ "loss": 19.8882,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.365116685628891,
+ "rewards/margins": 0.0977514460682869,
+ "rewards/rejected": -0.4628681540489197,
+ "step": 545
+ },
+ {
+ "epoch": 0.43273013375295044,
+ "grad_norm": 57.31916809082031,
+ "learning_rate": 2.829011306584983e-05,
+ "logits/chosen": 1.0495655536651611,
+ "logits/rejected": 0.9189395904541016,
+ "logps/chosen": -4.698520183563232,
+ "logps/rejected": -5.4618024826049805,
+ "loss": 22.771,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -0.3808877170085907,
+ "rewards/margins": 0.05942262336611748,
+ "rewards/rejected": -0.4403103291988373,
+ "step": 550
+ },
+ {
+ "epoch": 0.4366640440597954,
+ "grad_norm": 57.70218276977539,
+ "learning_rate": 2.8039207206802444e-05,
+ "logits/chosen": 1.0372337102890015,
+ "logits/rejected": 0.7637672424316406,
+ "logps/chosen": -5.192538261413574,
+ "logps/rejected": -6.2920708656311035,
+ "loss": 21.0183,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.4280625283718109,
+ "rewards/margins": 0.07608579099178314,
+ "rewards/rejected": -0.5041483640670776,
+ "step": 555
+ },
+ {
+ "epoch": 0.44059795436664045,
+ "grad_norm": 38.57474899291992,
+ "learning_rate": 2.778678306142936e-05,
+ "logits/chosen": 1.3346863985061646,
+ "logits/rejected": 1.2482701539993286,
+ "logps/chosen": -4.537243843078613,
+ "logps/rejected": -5.363685607910156,
+ "loss": 21.725,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.367723673582077,
+ "rewards/margins": 0.07422361522912979,
+ "rewards/rejected": -0.4419472813606262,
+ "step": 560
+ },
+ {
+ "epoch": 0.44453186467348543,
+ "grad_norm": 47.30175018310547,
+ "learning_rate": 2.753288830260655e-05,
+ "logits/chosen": 1.081312894821167,
+ "logits/rejected": 1.0209182500839233,
+ "logps/chosen": -4.504608154296875,
+ "logps/rejected": -5.094980716705322,
+ "loss": 23.979,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": -0.34262794256210327,
+ "rewards/margins": 0.04645577073097229,
+ "rewards/rejected": -0.38908374309539795,
+ "step": 565
+ },
+ {
+ "epoch": 0.44846577498033047,
+ "grad_norm": 41.14936065673828,
+ "learning_rate": 2.727757088095037e-05,
+ "logits/chosen": 1.2744576930999756,
+ "logits/rejected": 0.9736446142196655,
+ "logps/chosen": -4.100034236907959,
+ "logps/rejected": -5.128809452056885,
+ "loss": 20.4892,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.3064883053302765,
+ "rewards/margins": 0.08413257449865341,
+ "rewards/rejected": -0.3906208872795105,
+ "step": 570
+ },
+ {
+ "epoch": 0.45239968528717545,
+ "grad_norm": 29.25827407836914,
+ "learning_rate": 2.7020879015761555e-05,
+ "logits/chosen": 1.315836787223816,
+ "logits/rejected": 1.091429352760315,
+ "logps/chosen": -4.02718448638916,
+ "logps/rejected": -4.86886739730835,
+ "loss": 21.384,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.2975132465362549,
+ "rewards/margins": 0.07079926878213882,
+ "rewards/rejected": -0.3683125078678131,
+ "step": 575
+ },
+ {
+ "epoch": 0.4563335955940205,
+ "grad_norm": 156.1727752685547,
+ "learning_rate": 2.6762861185918532e-05,
+ "logits/chosen": 1.271761178970337,
+ "logits/rejected": 1.0751326084136963,
+ "logps/chosen": -4.122300624847412,
+ "logps/rejected": -5.07947301864624,
+ "loss": 20.2656,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -0.3042159378528595,
+ "rewards/margins": 0.0826338604092598,
+ "rewards/rejected": -0.3868497610092163,
+ "step": 580
+ },
+ {
+ "epoch": 0.46026750590086546,
+ "grad_norm": 33.745567321777344,
+ "learning_rate": 2.6503566120721685e-05,
+ "logits/chosen": 1.1284042596817017,
+ "logits/rejected": 0.8111549615859985,
+ "logps/chosen": -4.277104377746582,
+ "logps/rejected": -5.083390235900879,
+ "loss": 20.6651,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.3438721299171448,
+ "rewards/margins": 0.06679800897836685,
+ "rewards/rejected": -0.41067013144493103,
+ "step": 585
+ },
+ {
+ "epoch": 0.46420141620771044,
+ "grad_norm": 51.66657638549805,
+ "learning_rate": 2.6243042790690332e-05,
+ "logits/chosen": 0.8191879987716675,
+ "logits/rejected": 0.7653782367706299,
+ "logps/chosen": -5.171725273132324,
+ "logps/rejected": -5.932669162750244,
+ "loss": 22.7086,
+ "rewards/accuracies": 0.6000000238418579,
+ "rewards/chosen": -0.4288380742073059,
+ "rewards/margins": 0.0465051606297493,
+ "rewards/rejected": -0.4753432869911194,
+ "step": 590
+ },
+ {
+ "epoch": 0.46813532651455547,
+ "grad_norm": 44.73387908935547,
+ "learning_rate": 2.5981340398314148e-05,
+ "logits/chosen": 0.5237664580345154,
+ "logits/rejected": 0.2646290957927704,
+ "logps/chosen": -4.674435615539551,
+ "logps/rejected": -6.298100471496582,
+ "loss": 19.2422,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -0.3852913975715637,
+ "rewards/margins": 0.1304633468389511,
+ "rewards/rejected": -0.5157546997070312,
+ "step": 595
+ },
+ {
+ "epoch": 0.47206923682140045,
+ "grad_norm": 45.94012451171875,
+ "learning_rate": 2.571850836876074e-05,
+ "logits/chosen": 0.5761805176734924,
+ "logits/rejected": 0.38747546076774597,
+ "logps/chosen": -5.026305198669434,
+ "logps/rejected": -7.055234432220459,
+ "loss": 19.6134,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -0.4065484404563904,
+ "rewards/margins": 0.10743912309408188,
+ "rewards/rejected": -0.5139876008033752,
+ "step": 600
+ },
+ {
+ "epoch": 0.47206923682140045,
+ "eval_logits/chosen": 0.718625545501709,
+ "eval_logits/rejected": 0.5276994705200195,
+ "eval_logps/chosen": -5.122862815856934,
+ "eval_logps/rejected": -6.157461643218994,
+ "eval_loss": 21.601177215576172,
+ "eval_rewards/accuracies": 0.6812499761581421,
+ "eval_rewards/chosen": -0.40976619720458984,
+ "eval_rewards/margins": 0.08143284171819687,
+ "eval_rewards/rejected": -0.4911990761756897,
+ "eval_runtime": 265.5044,
+ "eval_samples_per_second": 2.411,
+ "eval_steps_per_second": 0.151,
+ "step": 600
+ }
+ ],
+ "logging_steps": 5,
+ "max_steps": 1271,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 10,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 16,
+ "trial_name": null,
+ "trial_params": null
+ }
checkpoint-600/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a47238caed23ea3736d7d1dfe9c79a22480ebe8f79366e83b467104c5d80f470
+ size 5688
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|endoftext|>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,130 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": false
+ },
+ "32000": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32001": {
+ "content": "<|assistant|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32002": {
+ "content": "<|placeholder1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32003": {
+ "content": "<|placeholder2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32004": {
+ "content": "<|placeholder3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32005": {
+ "content": "<|placeholder4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32006": {
+ "content": "<|system|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32007": {
+ "content": "<|end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32008": {
+ "content": "<|placeholder5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32009": {
+ "content": "<|placeholder6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32010": {
+ "content": "<|user|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|user|>' + '\n' + message['content'] + '<|end|>' + '\n' + '<|assistant|>' + '\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|end|>' + '\n'}}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "legacy": false,
+ "model_max_length": 4096,
+ "pad_token": "<|endoftext|>",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
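The `chat_template` in `tokenizer_config.json` is a Jinja string. A minimal Python sketch of the string it renders for a one-turn exchange, re-implementing the template logic by hand rather than loading the tokenizer (so this is an illustration of the template above, not output from the actual tokenizer):

```python
def apply_phi3_template(messages, bos_token="<s>"):
    """Hand-rolled rendering of the chat_template string above:
    user turns become '<|user|>\\n…<|end|>\\n<|assistant|>\\n' and
    assistant turns become '…<|end|>\\n', all after the BOS token."""
    out = bos_token
    for msg in messages:
        if msg["role"] == "user":
            out += "<|user|>\n" + msg["content"] + "<|end|>\n<|assistant|>\n"
        elif msg["role"] == "assistant":
            out += msg["content"] + "<|end|>\n"
    return out

prompt = apply_phi3_template([
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4"},
])
print(prompt)
# <s><|user|>
# What is 2+2?<|end|>
# <|assistant|>
# 4<|end|>
```

Note that the template emits the `<|assistant|>` header immediately after every user turn, so a conversation ending on a user message is already a generation prompt.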
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 1.0,
+ "train_loss": 21.99049959550403,
+ "train_runtime": 22427.3619,
+ "train_samples_per_second": 0.906,
+ "train_steps_per_second": 0.057
+ }
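The throughput numbers in `train_results.json` can be cross-checked against the trainer state, which records `max_steps: 1271` and `train_batch_size: 16`. A quick sketch; the logged rates are rounded to three decimals, so only approximate agreement is expected:

```python
train_runtime = 22427.3619       # seconds, from train_results.json above
samples_per_sec = 0.906          # rounded rates from the same file
steps_per_sec = 0.057
max_steps = 1271                 # from trainer_state.json above
train_batch_size = 16            # also from trainer_state.json

# runtime * steps/sec ~ 1278, vs 1271 logged; within rounding of 0.057
est_steps = train_runtime * steps_per_sec
assert abs(est_steps - max_steps) / max_steps < 0.01

# samples per step should roughly match the per-device batch size
assert round(samples_per_sec / steps_per_sec) == train_batch_size
```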
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a47238caed23ea3736d7d1dfe9c79a22480ebe8f79366e83b467104c5d80f470
+ size 5688
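The binary files in this commit (`training_args.bin`, `tokenizer.model`) are stored as three-line Git LFS pointers rather than the binaries themselves. A minimal sketch of parsing that pointer format into its fields, following the `version`/`oid`/`size` layout shown above:

```python
# Example pointer text, copied from the training_args.bin diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a47238caed23ea3736d7d1dfe9c79a22480ebe8f79366e83b467104c5d80f470
size 5688
"""

def parse_lfs_pointer(text):
    """Split each 'key value' line of a Git LFS pointer into a dict,
    separating the hash algorithm from the digest in the oid field."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "oid_algo": algo,
            "oid": digest, "size": int(fields["size"])}

info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # sha256 5688
```

The `size` field is the byte length of the real file, which is what the repo browser reports for these entries.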