soumya13 committed on
Commit 05d12e1
1 Parent(s): 342fcc6

Training in progress epoch 0

Files changed (3):
  1. README.md +7 -106
  2. config.json +20 -20
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -14,10 +14,10 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.0002
- - Validation Loss: 0.0000
- - Train Accuracy: 1.0
- - Epoch: 99
+ - Train Loss: 2.1210
+ - Validation Loss: 0.7138
+ - Train Accuracy: 0.8974
+ - Epoch: 0
 
  ## Model description
 
@@ -36,118 +36,19 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30900, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4560, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  - training_precision: float32
 
  ### Training results
 
  | Train Loss | Validation Loss | Train Accuracy | Epoch |
  |:----------:|:---------------:|:--------------:|:-----:|
- | 2.2048 | 0.8079 | 0.7 | 0 |
- | 0.5794 | 0.4310 | 0.9 | 1 |
- | 0.3227 | 0.1878 | 0.95 | 2 |
- | 0.2545 | 0.1830 | 0.95 | 3 |
- | 0.1640 | 0.0833 | 1.0 | 4 |
- | 0.1112 | 0.0548 | 1.0 | 5 |
- | 0.0704 | 0.0321 | 1.0 | 6 |
- | 0.0464 | 0.0233 | 1.0 | 7 |
- | 0.0370 | 0.0167 | 1.0 | 8 |
- | 0.0761 | 0.0133 | 1.0 | 9 |
- | 0.0236 | 0.0110 | 1.0 | 10 |
- | 0.0276 | 0.0075 | 1.0 | 11 |
- | 0.0329 | 0.0061 | 1.0 | 12 |
- | 0.0303 | 0.0050 | 1.0 | 13 |
- | 0.0165 | 0.0040 | 1.0 | 14 |
- | 0.0104 | 0.0034 | 1.0 | 15 |
- | 0.0445 | 0.0027 | 1.0 | 16 |
- | 0.0257 | 0.0023 | 1.0 | 17 |
- | 0.0117 | 0.0020 | 1.0 | 18 |
- | 0.0319 | 0.0016 | 1.0 | 19 |
- | 0.0205 | 0.0025 | 1.0 | 20 |
- | 0.0259 | 0.0016 | 1.0 | 21 |
- | 0.0144 | 0.0010 | 1.0 | 22 |
- | 0.0151 | 0.0007 | 1.0 | 23 |
- | 0.0256 | 0.0006 | 1.0 | 24 |
- | 0.0238 | 0.0005 | 1.0 | 25 |
- | 0.0095 | 0.0004 | 1.0 | 26 |
- | 0.0143 | 0.0004 | 1.0 | 27 |
- | 0.0231 | 0.0004 | 1.0 | 28 |
- | 0.0157 | 0.0003 | 1.0 | 29 |
- | 0.0208 | 0.0003 | 1.0 | 30 |
- | 0.0086 | 0.0003 | 1.0 | 31 |
- | 0.0080 | 0.0003 | 1.0 | 32 |
- | 0.0116 | 0.0002 | 1.0 | 33 |
- | 0.0214 | 0.0002 | 1.0 | 34 |
- | 0.0077 | 0.0002 | 1.0 | 35 |
- | 0.0083 | 0.0002 | 1.0 | 36 |
- | 0.0215 | 0.0002 | 1.0 | 37 |
- | 0.0279 | 0.0002 | 1.0 | 38 |
- | 0.0011 | 0.0002 | 1.0 | 39 |
- | 0.0142 | 0.0002 | 1.0 | 40 |
- | 0.0137 | 0.0002 | 1.0 | 41 |
- | 0.0223 | 0.0001 | 1.0 | 42 |
- | 0.0053 | 0.0001 | 1.0 | 43 |
- | 0.0196 | 0.0001 | 1.0 | 44 |
- | 0.0135 | 0.0001 | 1.0 | 45 |
- | 0.0208 | 0.0001 | 1.0 | 46 |
- | 0.0206 | 0.0001 | 1.0 | 47 |
- | 0.0188 | 0.0001 | 1.0 | 48 |
- | 0.0124 | 0.0001 | 1.0 | 49 |
- | 0.0161 | 0.0001 | 1.0 | 50 |
- | 0.0125 | 0.0001 | 1.0 | 51 |
- | 0.0186 | 0.0001 | 1.0 | 52 |
- | 0.0180 | 0.0001 | 1.0 | 53 |
- | 0.0068 | 0.0001 | 1.0 | 54 |
- | 0.0118 | 0.0001 | 1.0 | 55 |
- | 0.0155 | 0.0001 | 1.0 | 56 |
- | 0.0200 | 0.0001 | 1.0 | 57 |
- | 0.0064 | 0.0001 | 1.0 | 58 |
- | 0.0117 | 0.0001 | 1.0 | 59 |
- | 0.0007 | 0.0001 | 1.0 | 60 |
- | 0.0221 | 0.0001 | 1.0 | 61 |
- | 0.0115 | 0.0001 | 1.0 | 62 |
- | 0.0062 | 0.0001 | 1.0 | 63 |
- | 0.0269 | 0.0001 | 1.0 | 64 |
- | 0.0004 | 0.0001 | 1.0 | 65 |
- | 0.0161 | 0.0001 | 1.0 | 66 |
- | 0.0295 | 0.0001 | 1.0 | 67 |
- | 0.0054 | 0.0001 | 1.0 | 68 |
- | 0.0078 | 0.0001 | 1.0 | 69 |
- | 0.0090 | 0.0001 | 1.0 | 70 |
- | 0.0053 | 0.0001 | 1.0 | 71 |
- | 0.0200 | 0.0000 | 1.0 | 72 |
- | 0.0014 | 0.0000 | 1.0 | 73 |
- | 0.0149 | 0.0000 | 1.0 | 74 |
- | 0.0054 | 0.0000 | 1.0 | 75 |
- | 0.0131 | 0.0000 | 1.0 | 76 |
- | 0.0143 | 0.0000 | 1.0 | 77 |
- | 0.0003 | 0.0000 | 1.0 | 78 |
- | 0.0077 | 0.0000 | 1.0 | 79 |
- | 0.0181 | 0.0000 | 1.0 | 80 |
- | 0.0179 | 0.0000 | 1.0 | 81 |
- | 0.0046 | 0.0000 | 1.0 | 82 |
- | 0.0045 | 0.0000 | 1.0 | 83 |
- | 0.0044 | 0.0000 | 1.0 | 84 |
- | 0.0046 | 0.0000 | 1.0 | 85 |
- | 0.0086 | 0.0000 | 1.0 | 86 |
- | 0.0126 | 0.0000 | 1.0 | 87 |
- | 0.0103 | 0.0000 | 1.0 | 88 |
- | 0.0144 | 0.0000 | 1.0 | 89 |
- | 0.0122 | 0.0000 | 1.0 | 90 |
- | 0.0124 | 0.0000 | 1.0 | 91 |
- | 0.0079 | 0.0000 | 1.0 | 92 |
- | 0.0080 | 0.0000 | 1.0 | 93 |
- | 0.0078 | 0.0000 | 1.0 | 94 |
- | 0.0042 | 0.0000 | 1.0 | 95 |
- | 0.0044 | 0.0000 | 1.0 | 96 |
- | 0.0124 | 0.0000 | 1.0 | 97 |
- | 0.0088 | 0.0000 | 1.0 | 98 |
- | 0.0002 | 0.0000 | 1.0 | 99 |
+ | 2.1210 | 0.7138 | 0.8974 | 0 |
 
 
  ### Framework versions
 
  - Transformers 4.28.1
  - TensorFlow 2.12.0
- - Datasets 2.11.0
+ - Datasets 2.12.0
  - Tokenizers 0.13.3
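
The learning-rate entry in the optimizer dict above is Keras's `PolynomialDecay`; with `power: 1.0` and `cycle: False` it is simply a linear ramp from 2e-05 down to 0 over `decay_steps` (4560 in this commit, a value that depends on dataset size and epoch count). A minimal pure-Python sketch of that formula (the function name is illustrative, not from the repo):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=4560,
                     end_lr=0.0, power=1.0):
    # Mirrors tf.keras.optimizers.schedules.PolynomialDecay with cycle=False:
    # the step is clipped to decay_steps, then the LR interpolates from
    # initial_lr down to end_lr.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

# With power=1.0 the decay is linear: halfway through training the LR
# is exactly half of the initial value.
```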
config.json CHANGED
@@ -9,32 +9,32 @@
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "id2label": {
- "0": "lincoln",
- "1": "lexus",
- "2": "mercedes-benz",
- "3": "hyundai",
- "4": "jaguar",
- "5": "autonomous",
- "6": "cruise",
+ "0": "mercedes-benz",
+ "1": "hyundai",
+ "2": "nissan",
+ "3": "cruise",
+ "4": "chevrolet",
+ "5": "lexus",
+ "6": "ford",
  "7": "chrysler",
- "8": "chevrolet",
+ "8": "autonomous",
  "9": "toyota",
- "10": "ford",
- "11": "nissan"
+ "10": "lincoln",
+ "11": "jaguar"
  },
  "initializer_range": 0.02,
  "label2id": {
- "autonomous": 5,
- "chevrolet": 8,
+ "autonomous": 8,
+ "chevrolet": 4,
  "chrysler": 7,
- "cruise": 6,
- "ford": 10,
- "hyundai": 3,
- "jaguar": 4,
- "lexus": 1,
- "lincoln": 0,
- "mercedes-benz": 2,
- "nissan": 11,
+ "cruise": 3,
+ "ford": 6,
+ "hyundai": 1,
+ "jaguar": 11,
+ "lexus": 5,
+ "lincoln": 10,
+ "mercedes-benz": 0,
+ "nissan": 2,
  "toyota": 9
  },
  "layer_norm_epsilon": 1e-05,
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0bd8959773f762fce4cb10effbf5257d6676ad80aa5daeea57c7b4df1e4c7ade
+ oid sha256:ae755ec552657550dd81250124c62f48b9757887b7839aaa1c40a32f19d822d7
  size 497983984
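
`tf_model.h5` is stored via Git LFS, so the diff above shows only the pointer file that lives in the repo: a spec version line, the sha256 object id of the new weights, and their byte size (unchanged at ~498 MB). A small sketch of parsing such a pointer (the helper name is illustrative):

```python
def parse_lfs_pointer(text):
    # A Git LFS pointer is a short "key value" text file; the actual
    # weights are fetched from LFS storage by the sha256 oid.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer contents committed for tf_model.h5 in this change.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ae755ec552657550dd81250124c62f48b9757887b7839aaa1c40a32f19d822d7
size 497983984"""
```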