rb05751 committed
Commit 24de692
1 Parent(s): 216b591

End of training

Files changed (3)
  1. README.md +10 -9
  2. generation_config.json +1 -1
  3. pytorch_model.bin +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
-license: apache-2.0
-base_model: distilgpt2
+license: mit
+base_model: gpt2
 tags:
 - generated_from_trainer
 model-index:
@@ -13,9 +13,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # my_finetuned_gpt2_model
 
-This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.7574
+- Loss: 3.4635
 
 ## Model description
 
@@ -34,24 +34,25 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 3e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 2
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 3.8166 | 1.0 | 1126 | 3.7574 |
+| 3.435 | 1.0 | 1126 | 3.4640 |
+| 3.3513 | 2.0 | 2252 | 3.4635 |
 
 
 ### Framework versions
 
-- Transformers 4.32.1
+- Transformers 4.33.1
 - Pytorch 2.0.1+cu118
-- Datasets 2.14.4
+- Datasets 2.14.5
 - Tokenizers 0.13.3
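
For reference, here is a minimal sketch of a training run matching the hyperparameters listed in the updated card. The dataset, preprocessing, and output directory are placeholders (the card lists the dataset as None), so treat this as an illustration under those assumptions rather than the exact script behind this commit.

```python
# Minimal sketch reproducing the hyperparameters from the updated card.
# The tiny in-memory dataset and output_dir are placeholders, not from this commit.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder corpus so the sketch is self-contained.
raw = Dataset.from_dict({"text": ["example text one", "example text two"]})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="my_finetuned_gpt2_model",
    learning_rate=3e-5,                 # per the card
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",        # assumption: one eval per epoch, as in the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,             # placeholder: reuse the toy corpus for eval
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                         # Trainer's default AdamW betas/epsilon match the card
```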
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 50256,
   "eos_token_id": 50256,
-  "transformers_version": "4.32.1"
+  "transformers_version": "4.33.1"
 }
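
The generation_config.json change only bumps the recorded Transformers version; the BOS/EOS ids (50256) are the standard GPT-2 values. As a hedged usage sketch, the config can be loaded alongside the model for generation; the repo id below is inferred from the author and card title, not stated in the diff.

```python
# Sketch: load the checkpoint and its generation defaults, then sample.
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

repo_id = "rb05751/my_finetuned_gpt2_model"  # assumed, not stated in the diff
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
gen_config = GenerationConfig.from_pretrained(repo_id)  # bos/eos ids 50256 per the diff

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=gen_config, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```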
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:50c9f6ab6db487939e4e5c98692f00c7b541a22d68a9f2b0aca2d4ade9c9cb4b
+oid sha256:23ba3d93407fafc670783c136fd66b19d25ca066990e9b36a26d1ec10b475879
 size 497807197
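
The pytorch_model.bin entry is a Git LFS pointer: the oid is the SHA-256 of the actual weight file (about 498 MB), so a download can be checked against it. A small sketch, assuming the same repo id as above:

```python
# Sketch: verify a downloaded pytorch_model.bin against the LFS pointer's sha256 oid.
import hashlib
from huggingface_hub import hf_hub_download

path = hf_hub_download("rb05751/my_finetuned_gpt2_model", "pytorch_model.bin")  # repo id assumed

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

print(sha256.hexdigest() ==
      "23ba3d93407fafc670783c136fd66b19d25ca066990e9b36a26d1ec10b475879")
```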