gweltou committed
Commit df57601
1 Parent(s): 39beba3

End of training

README.md CHANGED
@@ -1,8 +1,8 @@
  ---
  license: apache-2.0
+ base_model: distilbert/distilgpt2
  tags:
  - generated_from_trainer
- base_model: distilbert/distilgpt2
  model-index:
  - name: tiny-gpt2-br
    results: []
@@ -13,9 +13,14 @@ should probably proofread and complete it, then remove this comment. -->
 
  # tiny-gpt2-br
 
- This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset.
+ This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.6213
+ - eval_loss: 3.2672
+ - eval_runtime: 134.3513
+ - eval_samples_per_second: 259.023
+ - eval_steps_per_second: 16.189
+ - epoch: 3.42
+ - step: 134000
 
  ## Model description
 
@@ -34,39 +39,14 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 16
- - eval_batch_size: 32
+ - learning_rate: 0.0008
+ - train_batch_size: 8
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - training_steps: 19000
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:-----:|:---------------:|
- | 5.7274 | 0.21 | 1000 | 4.9308 |
- | 4.7191 | 0.42 | 2000 | 4.5347 |
- | 4.4579 | 0.63 | 3000 | 4.3432 |
- | 4.2769 | 0.84 | 4000 | 4.1893 |
- | 4.1086 | 1.05 | 5000 | 4.0861 |
- | 3.9327 | 1.26 | 6000 | 3.9992 |
- | 3.8812 | 1.47 | 7000 | 3.9216 |
- | 3.8298 | 1.68 | 8000 | 3.8648 |
- | 3.7785 | 1.89 | 9000 | 3.8126 |
- | 3.6099 | 2.1 | 10000 | 3.7931 |
- | 3.471 | 2.31 | 11000 | 3.7539 |
- | 3.4651 | 2.52 | 12000 | 3.7141 |
- | 3.4451 | 2.73 | 13000 | 3.6754 |
- | 3.4251 | 2.94 | 14000 | 3.6327 |
- | 3.1855 | 3.15 | 15000 | 3.6779 |
- | 3.0962 | 3.36 | 16000 | 3.6757 |
- | 3.0971 | 3.56 | 17000 | 3.6437 |
- | 3.0816 | 3.77 | 18000 | 3.6287 |
- | 3.0582 | 3.98 | 19000 | 3.6213 |
+ - num_epochs: 4
 
  ### Framework versions
 
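The updated card reports eval_loss 3.2672 at step 134000 (epoch 3.42), which corresponds to a perplexity of about exp(3.2672) ≈ 26. Its hyperparameters map directly onto `transformers.TrainingArguments`; the sketch below is a hypothetical reconstruction, not the author's training script, and the output directory is a placeholder.

```python
# Hypothetical reconstruction of the card's hyperparameters as
# transformers TrainingArguments; output_dir is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tiny-gpt2-br",       # placeholder, not from the commit
    learning_rate=8e-4,              # learning_rate: 0.0008
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    num_train_epochs=4,
)
print(args.learning_rate, args.num_train_epochs)
```

The Adam betas and epsilon listed in the card are the transformers defaults, so they need no explicit arguments here.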
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:46dcb6f9966016e71724c39b390c540dd2d8fd77f4bd9626dd290fb42b8fcf66
+ oid sha256:f6721d43c325291c48544c3a1e5ae02183dd14da471bef6ba21b6384c03a7eac
  size 327657928
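The model.safetensors entry is a Git LFS pointer, so the diff only swaps the SHA-256 of the weights file while the size stays at 327657928 bytes. A minimal sketch for checking a downloaded copy against the new oid, assuming the file sits in the current directory:

```python
# Sketch: verify a downloaded model.safetensors against the SHA-256
# recorded in the Git LFS pointer above. The local path is an assumption.
import hashlib

EXPECTED = "f6721d43c325291c48544c3a1e5ae02183dd14da471bef6ba21b6384c03a7eac"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

print(h.hexdigest() == EXPECTED)
```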
runs/Apr01_18-06-47_gweltaz-NUC10i7FNK/events.out.tfevents.1711987611.gweltaz-NUC10i7FNK CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c9f1a904bb874ab823568ffc458dafb0b8a520d84cbb016c88b959f3adeef87b
- size 7029
+ oid sha256:5567e05cfda04d34326806bbdf4513f4371bbb30357fe3e7c72a00d17b0d1a45
+ size 7300
runs/Apr01_18-54-07_gweltaz-NUC10i7FNK/events.out.tfevents.1711990451.gweltaz-NUC10i7FNK ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c5990c73a5a4ecbb8f49db6f31581480c903bf193430f8cc698d7f8dabddead
+ size 70540
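The added runs/Apr01_18-54-07_gweltaz-NUC10i7FNK file is a TensorBoard event log, also stored through LFS. A small sketch for inspecting its logged scalars with the `tensorboard` package; the "train/loss" tag is only a guess at what the Trainer logged, hence the Tags() check first:

```python
# Sketch: read the scalars logged in the new event file. The path mirrors
# this commit's run directory; the "train/loss" tag is an assumption.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Apr01_18-54-07_gweltaz-NUC10i7FNK")
acc.Reload()

tags = acc.Tags()["scalars"]
print(tags)                          # list the tags actually present
if "train/loss" in tags:
    for event in acc.Scalars("train/loss"):
        print(event.step, event.value)
```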
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:39a8d6f02fb3929cbe6520c32e9bb74308e8f5c45da0791a2253fb8abae27908
+ oid sha256:7b0acbee13ddf623aa2f948dcc2d177e96c8edd64a2eed6cdc5776481bd85c9e
  size 4475
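training_args.bin is the pickled TrainingArguments object the Trainer saves alongside the checkpoint, which is why only its hash changes while the size stays at 4475 bytes. A hedged sketch for inspecting it and trying the checkpoint; the repo id gweltou/tiny-gpt2-br is inferred from the committer and model name in this commit, and the prompt is an arbitrary placeholder:

```python
# Sketch, not the author's code: inspect the updated training_args.bin and
# try the checkpoint. The repo id is inferred from this commit's metadata.
import torch
from transformers import pipeline

# training_args.bin is a pickled TrainingArguments object, so recent
# PyTorch versions need weights_only=False to unpickle it.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)

generator = pipeline("text-generation", model="gweltou/tiny-gpt2-br")
print(generator("Demat", max_new_tokens=30)[0]["generated_text"])  # placeholder prompt
```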