jysh1023 committed
Commit 9278ae6 · 1 Parent(s): ec34a87

End of training

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+model_cache/15728833216685760104.blob filter=lfs diff=lfs merge=lfs -text
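The added rule is the kind of entry `git lfs track` writes into `.gitattributes`, so the new cache blob is committed as an LFS pointer rather than a raw ~139 MB file. A minimal sketch of appending such a rule by hand (the path comes from the diff above; the helper itself is hypothetical, not part of this commit):

```python
from pathlib import Path

# Illustrative only: add an LFS filter rule so the cache blob is stored as a
# Git LFS pointer. Equivalent in effect to `git lfs track <path>`.
rule = "model_cache/15728833216685760104.blob filter=lfs diff=lfs merge=lfs -text\n"
attrs = Path(".gitattributes")
existing = attrs.read_text() if attrs.exists() else ""
if rule not in existing:
    attrs.write_text(existing + rule)
```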
README.md CHANGED
@@ -5,9 +5,24 @@ tags:
 - generated_from_trainer
 datasets:
 - glue
+metrics:
+- accuracy
 model-index:
 - name: bert_uncased_L-6_H-768_A-12-QAT
-  results: []
+  results:
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: glue
+      type: glue
+      config: sst2
+      split: validation
+      args: sst2
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.8176605504587156
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,6 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 # bert_uncased_L-6_H-768_A-12-QAT
 
 This model is a fine-tuned version of [google/bert_uncased_L-6_H-768_A-12](https://huggingface.co/google/bert_uncased_L-6_H-768_A-12) on the glue dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.7339
+- Accuracy: 0.8177
 
 ## Model description
 
@@ -34,16 +52,26 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
-- seed: 42
+- learning_rate: 6e-05
+- train_batch_size: 128
+- eval_batch_size: 128
+- seed: 33
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1.0
+- num_epochs: 7
+- mixed_precision_training: Native AMP
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 0.3055        | 1.0   | 8    | 0.4989          | 0.8303   |
+| 0.192         | 2.0   | 16   | 0.4659          | 0.8108   |
+| 0.0994        | 3.0   | 24   | 0.5389          | 0.8177   |
+| 0.0324        | 4.0   | 32   | 0.7313          | 0.8096   |
+| 0.0164        | 5.0   | 40   | 0.6689          | 0.8211   |
+| 0.0137        | 6.0   | 48   | 0.7148          | 0.8154   |
+| 0.0041        | 7.0   | 56   | 0.7339          | 0.8177   |
 
 
 ### Framework versions
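The updated card reports SST-2 validation accuracy of 0.8177. A minimal usage sketch for the checkpoint, assuming it is published as `jysh1023/bert_uncased_L-6_H-768_A-12-QAT` (the repo id is an assumption based on the committer and model name, not stated in the diff):

```python
from transformers import pipeline

# Hypothetical repo id; adjust to wherever this checkpoint is actually hosted.
model_id = "jysh1023/bert_uncased_L-6_H-768_A-12-QAT"

# SST-2 is binary sentiment classification, so a text-classification pipeline suffices.
classifier = pipeline("text-classification", model=model_id)
print(classifier("a gripping, beautifully shot film"))
```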
logs/events.out.tfevents.1700304292.1d5d6d420ef6.77096.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:322ded7f2843b0abaf337a885b7d389bb98c563f342338afe09ecb34cd1d9171
+size 5003
logs/events.out.tfevents.1700304317.1d5d6d420ef6.77096.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9866b53043cb165c886efc073745bb26e0812ce141907e253c82c52a26b9781
+size 4323
logs/events.out.tfevents.1700304330.1d5d6d420ef6.77096.2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:144a67ebe0a9f52e1db0b4e87af65066e576b80eb17ba57f944b97503b517d91
+size 4323
logs/events.out.tfevents.1700304346.1d5d6d420ef6.77096.3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04c4a004b7236045b031a77d4cad92496f36c3aed5d4f93c3413a572a1e70d14
+size 5142
logs/events.out.tfevents.1700304416.1d5d6d420ef6.77096.4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5b066e908d6fed783b4f6f19289beb183cce717d31b912323e407b185d2e87c
+size 4323
logs/events.out.tfevents.1700304434.1d5d6d420ef6.77096.5 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de4d7c3f72d8a5fe52813c642eabf20c60551c0bc6e74e31c40861e2770a9993
+size 4323
logs/events.out.tfevents.1700304442.1d5d6d420ef6.77096.6 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3ab9f0d42ae8e7c804d7a40680c091845bbef002205053e15eeb3bf69f86f04
+size 4323
logs/events.out.tfevents.1700304454.1d5d6d420ef6.77096.7 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d6c78dfe7b625332a74865eb75255a895828b313a88ae53046a6d8255940205
+size 4323
logs/events.out.tfevents.1700304463.1d5d6d420ef6.77096.8 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2a08c4f20c438bc6b24128c2252930fd755a3d85c12de8bdefb15a651c51462
+size 5142
logs/events.out.tfevents.1700304494.1d5d6d420ef6.77096.9 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75f742df241fcb000d9e7a6ef66fabe7e10e9078931da2932db6e4139a546544
+size 4184
logs/events.out.tfevents.1700304504.1d5d6d420ef6.77096.10 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cd46285acc9b5877bf9c22df44d515ba66653a140b8a1ffa30c053deacfb313
+size 4184
logs/events.out.tfevents.1700304512.1d5d6d420ef6.77096.11 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67eca24a63de7172f7c317cd7ba565028ab46801ddca7e3515a538e73d9d6cdd
+size 5142
logs/events.out.tfevents.1700304548.1d5d6d420ef6.77096.12 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2135a2715abe0984132f9037e8d366fa17770501f07208721cc7b953fd600474
+size 7968
model_cache/15728833216685760104.blob ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:603d561f40663732d7ad844fc09647db372b00e39d80cd547fc27a0eeafc2cc8
+size 139655433
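Each ADDED file above is committed as a Git LFS pointer (version, oid, size) rather than its raw bytes. A short sketch of resolving one of them to its real content with `huggingface_hub` (the filename comes from the diff; the repo id is an assumption as before):

```python
from huggingface_hub import hf_hub_download

# Downloads the actual blob the LFS pointer refers to and returns its local path;
# the repo id below is hypothetical.
path = hf_hub_download(
    repo_id="jysh1023/bert_uncased_L-6_H-768_A-12-QAT",
    filename="model_cache/15728833216685760104.blob",
)
print(path)
```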
nncf_output.log CHANGED
@@ -130,3 +130,19 @@ model.nncf.set_original_unbound_forward(fn)
 if `fn` has an unbound 0-th `self` argument, or
 with model.nncf.temporary_bound_original_forward(fn): ...
 if `fn` already had 0-th `self` argument bound or never had it in the first place.
+WARNING:nncf:You are setting `forward` on an NNCF-processed model object.
+NNCF relies on custom-wrapping the `forward` call in order to function properly.
+Arbitrary adjustments to the forward function on an NNCFNetwork object have undefined behavior.
+If you need to replace the underlying forward function of the original model so that NNCF should be using that instead of the original forward function that NNCF saved during the compressed model creation, you can do this by calling:
+model.nncf.set_original_unbound_forward(fn)
+if `fn` has an unbound 0-th `self` argument, or
+with model.nncf.temporary_bound_original_forward(fn): ...
+if `fn` already had 0-th `self` argument bound or never had it in the first place.
+WARNING:nncf:You are setting `forward` on an NNCF-processed model object.
+NNCF relies on custom-wrapping the `forward` call in order to function properly.
+Arbitrary adjustments to the forward function on an NNCFNetwork object have undefined behavior.
+If you need to replace the underlying forward function of the original model so that NNCF should be using that instead of the original forward function that NNCF saved during the compressed model creation, you can do this by calling:
+model.nncf.set_original_unbound_forward(fn)
+if `fn` has an unbound 0-th `self` argument, or
+with model.nncf.temporary_bound_original_forward(fn): ...
+if `fn` already had 0-th `self` argument bound or never had it in the first place.
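The repeated warning names the two supported ways to replace `forward` on an NNCF-wrapped model without bypassing its compression hooks. A minimal sketch of both, assuming `model` is an NNCFNetwork produced elsewhere; the helper function names here are illustrative, only the `model.nncf.*` calls come from the log:

```python
# Sketch only: `model` is assumed to be an NNCF-processed model (NNCFNetwork).
# `new_forward` takes `self` as its first argument; `bound_forward` does not.

def replace_forward_permanently(model, new_forward):
    # NNCF re-wraps the supplied function, so quantization hooks keep working.
    model.nncf.set_original_unbound_forward(new_forward)

def run_with_temporary_forward(model, bound_forward, batch):
    # The original forward is restored automatically when the context exits.
    with model.nncf.temporary_bound_original_forward(bound_forward):
        return model(**batch)
```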
openvino_model.xml CHANGED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9714306a601a5684ca4e944f902b17302cd9c4b25181704adf26efbf008cee23
-size 268184942
+oid sha256:f2eebddf26a1178069ab8ad0802c5be97b96cdd0f7697e747ba4a9f6e18a791e
+size 267862062
runs/Nov18_10-27-10_1d5d6d420ef6/events.out.tfevents.1700303372.1d5d6d420ef6.278.31 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c06f1bce5b1365a22f0412e7af0666bc2d1628c7ca6ecc9c2dd4d31c6817d498
+size 405
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a946ae40b84094d79d9657770c80899b58573ab90fae3025de9580edc1380fc6
+oid sha256:4a1c7e76b5ded3ee7548593ad5aa7f2b3f20c00381ded97317ed1671fdd75f79
 size 4600