Model save

Files changed:
- README.md +28 -39
- all_results.json +7 -19
- generation_config.json +1 -1
- model.safetensors +1 -1
- runs/Jul08_06-23-29_42dbe5cf9ed4/events.out.tfevents.1720420363.42dbe5cf9ed4.848884.0 +2 -2
- train_results.json +7 -6
- trainer_state.json +0 -0

README.md
CHANGED
@@ -1,4 +1,6 @@
 ---
+license: apache-2.0
+base_model: nnheui/pythia-1.4b-sft-full
 tags:
 - trl
 - dpo
@@ -13,17 +15,18 @@ should probably proofread and complete it, then remove this comment. -->
 
 # pythia-1.4b-dpo-full
 
-This model
+This model is a fine-tuned version of [nnheui/pythia-1.4b-sft-full](https://huggingface.co/nnheui/pythia-1.4b-sft-full) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Rewards/chosen: 0.
-- Rewards/rejected: 0.
-- Rewards/accuracies: 0.
-- Rewards/margins: 0.
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected: -
-- Logits/chosen: -
+- Loss: 0.6259
+- Rewards/chosen: -0.5234
+- Rewards/rejected: -0.7852
+- Rewards/accuracies: 0.6567
+- Rewards/margins: 0.2617
+- Logps/rejected: -416.0
+- Logps/chosen: -446.0
+- Logits/rejected: -1.2422
+- Logits/chosen: -1.1953
+- Logps/bottom Tokens: -0.0007
 
 ## Model description
 
@@ -44,12 +47,13 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 5e-07
 - train_batch_size: 5
-- eval_batch_size:
+- eval_batch_size: 5
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 6
--
--
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 120
+- total_eval_batch_size: 30
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
@@ -57,33 +61,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.686 | 0.29 | 600 | 0.6446 | 0.5781 | 0.4180 | 0.5714 | 0.1592 | -2024.0 | -2320.0 | -0.7188 | -0.6602 |
-| 0.6459 | 0.34 | 700 | 0.6449 | 0.5508 | 0.3633 | 0.6012 | 0.1885 | -2032.0 | -2320.0 | -0.6875 | -0.6289 |
-| 0.6458 | 0.39 | 800 | 0.6421 | 0.5586 | 0.3867 | 0.5774 | 0.1709 | -2024.0 | -2320.0 | -0.6953 | -0.6406 |
-| 0.6451 | 0.44 | 900 | 0.6398 | 0.7109 | 0.5039 | 0.5685 | 0.2070 | -2016.0 | -2304.0 | -0.6719 | -0.6133 |
-| 0.6213 | 0.49 | 1000 | 0.6407 | 0.7734 | 0.5742 | 0.5714 | 0.2012 | -2008.0 | -2304.0 | -0.6602 | -0.6016 |
-| 0.6313 | 0.54 | 1100 | 0.6387 | 0.5391 | 0.3555 | 0.5893 | 0.1807 | -2032.0 | -2320.0 | -0.6680 | -0.6094 |
-| 0.6298 | 0.59 | 1200 | 0.6380 | 0.6953 | 0.4922 | 0.6042 | 0.2031 | -2016.0 | -2304.0 | -0.6523 | -0.5977 |
-| 0.6461 | 0.64 | 1300 | 0.6396 | 0.5586 | 0.3613 | 0.5863 | 0.1963 | -2032.0 | -2320.0 | -0.6914 | -0.6367 |
-| 0.6258 | 0.69 | 1400 | 0.6360 | 0.6914 | 0.4727 | 0.5923 | 0.2207 | -2016.0 | -2304.0 | -0.6758 | -0.6172 |
-| 0.6347 | 0.74 | 1500 | 0.6375 | 0.625 | 0.4141 | 0.5893 | 0.2100 | -2024.0 | -2320.0 | -0.6641 | -0.6094 |
-| 0.6185 | 0.79 | 1600 | 0.6382 | 0.5977 | 0.3926 | 0.6042 | 0.2051 | -2032.0 | -2320.0 | -0.6797 | -0.625 |
-| 0.6408 | 0.83 | 1700 | 0.6374 | 0.5977 | 0.3926 | 0.5952 | 0.2041 | -2024.0 | -2320.0 | -0.6719 | -0.6172 |
-| 0.662 | 0.88 | 1800 | 0.6355 | 0.6094 | 0.3984 | 0.6012 | 0.2119 | -2024.0 | -2320.0 | -0.6836 | -0.6289 |
-| 0.6385 | 0.93 | 1900 | 0.6379 | 0.6055 | 0.3926 | 0.625 | 0.2129 | -2024.0 | -2320.0 | -0.6758 | -0.6211 |
-| 0.6154 | 0.98 | 2000 | 0.6381 | 0.6094 | 0.4043 | 0.6012 | 0.2041 | -2024.0 | -2320.0 | -0.6758 | -0.6211 |
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Logps/bottom Tokens |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:-------------------:|
+| 0.678 | 0.1963 | 100 | 0.6789 | -0.0275 | -0.0608 | 0.5881 | 0.0332 | -344.0 | -396.0 | -1.1562 | -1.0938 | -0.0009 |
+| 0.645 | 0.3925 | 200 | 0.6489 | -0.2871 | -0.4238 | 0.6448 | 0.1367 | -380.0 | -422.0 | -1.2031 | -1.1562 | -0.0009 |
+| 0.6396 | 0.5888 | 300 | 0.6304 | -0.4512 | -0.6797 | 0.6627 | 0.2275 | -406.0 | -438.0 | -1.2344 | -1.1875 | -0.0008 |
+| 0.6102 | 0.7851 | 400 | 0.6268 | -0.5039 | -0.7617 | 0.6567 | 0.2578 | -414.0 | -444.0 | -1.2344 | -1.1875 | -0.0007 |
+| 0.6084 | 0.9814 | 500 | 0.6259 | -0.5234 | -0.7852 | 0.6567 | 0.2617 | -416.0 | -446.0 | -1.2422 | -1.1953 | -0.0007 |
 
 
 ### Framework versions
 
-- Transformers 4.
-- Pytorch 2.2.
-- Datasets 2.
-- Tokenizers 0.
+- Transformers 4.40.0
+- Pytorch 2.2.2+cu121
+- Datasets 2.19.0
+- Tokenizers 0.19.1
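The derived batch-size fields added to the README above are internally consistent; a minimal sketch, assuming the trainer computed them with the usual arithmetic (per-device batch × gradient-accumulation steps × number of devices, with no accumulation at eval time):

```python
# Reproduce the derived batch sizes from the per-device settings in the card.
train_batch_size = 5             # per-device train batch size
eval_batch_size = 5              # per-device eval batch size
num_devices = 6
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices  # eval does not accumulate

print(total_train_batch_size)  # 120, matching total_train_batch_size in the card
print(total_eval_batch_size)   # 30, matching total_eval_batch_size in the card
```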
all_results.json
CHANGED
@@ -1,21 +1,9 @@
 {
-    "epoch":
-    "
-    "
-    "
-    "
-    "
-    "
-    "eval_rewards/chosen": 0.609375,
-    "eval_rewards/margins": 0.2001953125,
-    "eval_rewards/rejected": 0.41015625,
-    "eval_runtime": 85.4736,
-    "eval_samples": 2000,
-    "eval_samples_per_second": 23.399,
-    "eval_steps_per_second": 0.491,
-    "train_loss": 0.6502101503246783,
-    "train_runtime": 8979.6364,
-    "train_samples": 61135,
-    "train_samples_per_second": 6.808,
-    "train_steps_per_second": 0.227
+    "epoch": 0.9990186457311089,
+    "total_flos": 0.0,
+    "train_loss": 0.6464882252961105,
+    "train_runtime": 7224.3746,
+    "train_samples": 61134,
+    "train_samples_per_second": 8.462,
+    "train_steps_per_second": 0.07
 }
generation_config.json
CHANGED
@@ -2,6 +2,6 @@
   "_from_model_config": true,
   "bos_token_id": 0,
   "eos_token_id": 0,
-  "transformers_version": "4.
+  "transformers_version": "4.40.0",
   "use_cache": false
 }
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:dee645992f24ee02b486f2e81b344b7a98df284d1f79aa4f2f1679fdd185f99d
 size 2829330208
runs/Jul08_06-23-29_42dbe5cf9ed4/events.out.tfevents.1720420363.42dbe5cf9ed4.848884.0
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e1e67859856cf3694e01f54ca2dccfaa3c3cd4bdb09f64aeaccc2bae9c86f7ec
+size 47528
train_results.json
CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch":
-    "
-    "
-    "
-    "
-    "
+    "epoch": 0.9990186457311089,
+    "total_flos": 0.0,
+    "train_loss": 0.6464882252961105,
+    "train_runtime": 7224.3746,
+    "train_samples": 61134,
+    "train_samples_per_second": 8.462,
+    "train_steps_per_second": 0.07
 }
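The throughput figures in the new train_results.json follow from the sample count and runtime; a quick consistency check (values copied from the diff above, and assuming one optimizer step per total_train_batch_size of 120 samples, as in the README):

```python
# Values from the new train_results.json / README hyperparameters.
train_samples = 61134
train_runtime = 7224.3746        # seconds
total_train_batch_size = 120     # assumed samples consumed per optimizer step

samples_per_second = train_samples / train_runtime
steps = train_samples / total_train_batch_size   # ~509 optimizer steps in one epoch
steps_per_second = steps / train_runtime

print(round(samples_per_second, 3))  # 8.462, matching train_samples_per_second
print(round(steps_per_second, 2))    # 0.07, matching train_steps_per_second
```

The ~509 implied steps also agree with the training-results table, whose last logged step is 500 at epoch 0.9814.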
trainer_state.json
CHANGED
The diff for this file is too large to render.