Kyungmin Jeon committed on
Commit 3ee4c7b
1 Parent(s): 0e8aec4

End of training

README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ license: mit
+ base_model: gogamza/kobart-base-v2
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: KoBART_base_v2-trial
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # KoBART_base_v2-trial
+
+ This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1815
+
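+ A minimal usage sketch (an assumption, not part of the auto-generated card; the repo id is a placeholder for wherever this checkpoint is hosted):
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ model_id = "KoBART_base_v2-trial"  # placeholder; substitute the actual repo id or local path
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ # KoBART is a Korean BART; the card does not state the target task of this fine-tune.
+ inputs = tokenizer("예시 입력 문장입니다.", return_tensors="pt")
+ output_ids = model.generate(**inputs, max_length=64)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```
+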
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 0.0005
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 20
+ - num_epochs: 3
+ - mixed_precision_training: Native AMP
+
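+ A sketch of these values as `Seq2SeqTrainingArguments` (an assumed reconstruction; the actual training script is not part of this commit):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="KoBART_base_v2-trial",  # placeholder
+     learning_rate=5e-4,
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     seed=42,
+     lr_scheduler_type="cosine",
+     warmup_steps=20,
+     num_train_epochs=3,
+     fp16=True,  # "Native AMP" mixed precision
+     # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
+     evaluation_strategy="steps",  # assumed from the 50-step eval cadence below
+     eval_steps=50,
+     logging_steps=50,
+ )
+ ```
+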
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 2.4147 | 0.11 | 50 | 0.5490 |
+ | 0.5457 | 0.22 | 100 | 0.4810 |
+ | 0.4642 | 0.32 | 150 | 0.3971 |
+ | 0.4364 | 0.43 | 200 | 0.3955 |
+ | 0.4111 | 0.54 | 250 | 0.3851 |
+ | 0.3888 | 0.65 | 300 | 0.3438 |
+ | 0.3586 | 0.76 | 350 | 0.3290 |
+ | 0.3304 | 0.87 | 400 | 0.3201 |
+ | 0.3337 | 0.97 | 450 | 0.2992 |
+ | 0.2677 | 1.08 | 500 | 0.3161 |
+ | 0.2576 | 1.19 | 550 | 0.2981 |
+ | 0.2467 | 1.3 | 600 | 0.2846 |
+ | 0.2369 | 1.41 | 650 | 0.2674 |
+ | 0.226 | 1.52 | 700 | 0.2529 |
+ | 0.2204 | 1.62 | 750 | 0.2446 |
+ | 0.204 | 1.73 | 800 | 0.2400 |
+ | 0.2071 | 1.84 | 850 | 0.2262 |
+ | 0.1911 | 1.95 | 900 | 0.2153 |
+ | 0.1591 | 2.06 | 950 | 0.2121 |
+ | 0.1338 | 2.16 | 1000 | 0.2090 |
+ | 0.1312 | 2.27 | 1050 | 0.1986 |
+ | 0.1336 | 2.38 | 1100 | 0.1947 |
+ | 0.1205 | 2.49 | 1150 | 0.1903 |
+ | 0.1162 | 2.6 | 1200 | 0.1867 |
+ | 0.1187 | 2.71 | 1250 | 0.1840 |
+ | 0.1171 | 2.81 | 1300 | 0.1821 |
+ | 0.1149 | 2.92 | 1350 | 0.1815 |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
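+
+ A quick environment check against these pins (a sketch, not part of the generated card):
+
+ ```python
+ import datasets, tokenizers, torch, transformers
+
+ # Expected: 4.36.0, 2.0.1+cu117, 2.15.0, 0.15.0
+ print(transformers.__version__, torch.__version__, datasets.__version__, tokenizers.__version__)
+ ```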
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "decoder_start_token_id": 1,
+   "eos_token_id": 1,
+   "forced_eos_token_id": 1,
+   "pad_token_id": 3,
+   "transformers_version": "4.36.0"
+ }
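
These are the generation defaults that `model.generate()` will pick up for this checkpoint: ids 1 for bos/eos/decoder-start/forced-eos and id 3 for padding. A sketch for inspecting them (the repo id is a placeholder):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("KoBART_base_v2-trial")  # placeholder id
print(gen_config.eos_token_id)  # 1
print(gen_config.pad_token_id)  # 3
```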
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6f544c8e59c0963c8eead8c6c55a1d0ea83a00d30d99babf798235052ef64f3b
+ oid sha256:b34874bc55cf751b3eabe532f23280afb152403cf7dd7ada7277ed3dfc7c9079
  size 495589768
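
The weights file is stored through Git LFS, so the diff above only swaps the pointer's content hash; the file size is unchanged. Resolved files can be fetched, for example, with `huggingface_hub` (a sketch; the repo id is a placeholder):

```python
from huggingface_hub import snapshot_download

# Downloads the repository with LFS pointers resolved to the actual files.
local_dir = snapshot_download("username/KoBART_base_v2-trial")  # placeholder id
print(local_dir)
```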
runs/Dec19_11-09-00_e64a5fb166fb/events.out.tfevents.1702951741.e64a5fb166fb.8679.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3e40bac29d081b0d51ad241cfc9f452713df01bc33c61b37fea8488f2fe8d076
- size 16554
+ oid sha256:a086ed300fa944ebbe7c5afc4cd791dc06f161b0661e853a221f7f9bc976294d
+ size 17179