DifeiT committed on
Commit 7011be3
1 Parent(s): 1bed954

End of training

Files changed (2):
  1. README.md +57 -10
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
    metrics:
    - name: Accuracy
      type: accuracy
-     value: 0.8126984126984127
+     value: 0.6151724137931035
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7848
- - Accuracy: 0.8127
+ - Loss: 1.2164
+ - Accuracy: 0.6152

  ## Model description

@@ -61,20 +61,67 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3
+ - num_epochs: 50

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 1.5024 | 1.0 | 10 | 1.0679 | 0.8127 |
- | 0.8734 | 2.0 | 20 | 0.8166 | 0.8127 |
- | 0.7213 | 3.0 | 30 | 0.7848 | 0.8127 |
+ | Training Loss | Epoch | Step  | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|
+ | 1.5655 | 1.0 | 238 | 1.5235 | 0.4039 |
+ | 1.3848 | 2.0 | 477 | 1.3622 | 0.4692 |
+ | 1.2812 | 3.0 | 716 | 1.2811 | 0.5150 |
+ | 1.2039 | 4.0 | 955 | 1.1795 | 0.5556 |
+ | 1.1641 | 5.0 | 1193 | 1.1627 | 0.5534 |
+ | 1.1961 | 6.0 | 1432 | 1.1393 | 0.5705 |
+ | 1.1382 | 7.0 | 1671 | 1.0921 | 0.5804 |
+ | 0.9653 | 8.0 | 1910 | 1.0790 | 0.5876 |
+ | 0.9346 | 9.0 | 2148 | 1.0727 | 0.5931 |
+ | 0.9083 | 10.0 | 2387 | 1.0605 | 0.5994 |
+ | 0.8936 | 11.0 | 2626 | 1.0147 | 0.6146 |
+ | 0.8504 | 12.0 | 2865 | 1.0849 | 0.5818 |
+ | 0.8544 | 13.0 | 3103 | 1.0349 | 0.6052 |
+ | 0.7884 | 14.0 | 3342 | 1.0435 | 0.6074 |
+ | 0.7974 | 15.0 | 3581 | 1.0082 | 0.6127 |
+ | 0.7921 | 16.0 | 3820 | 1.0438 | 0.6017 |
+ | 0.709 | 17.0 | 4058 | 1.0484 | 0.6094 |
+ | 0.6646 | 18.0 | 4297 | 1.0554 | 0.6221 |
+ | 0.6832 | 19.0 | 4536 | 1.0455 | 0.6124 |
+ | 0.7076 | 20.0 | 4775 | 1.0905 | 0.6 |
+ | 0.7442 | 21.0 | 5013 | 1.1094 | 0.6008 |
+ | 0.6332 | 22.0 | 5252 | 1.0777 | 0.6063 |
+ | 0.6417 | 23.0 | 5491 | 1.0765 | 0.6141 |
+ | 0.6267 | 24.0 | 5730 | 1.1057 | 0.6091 |
+ | 0.6082 | 25.0 | 5968 | 1.0962 | 0.6171 |
+ | 0.6191 | 26.0 | 6207 | 1.1178 | 0.6039 |
+ | 0.5654 | 27.0 | 6446 | 1.1386 | 0.5948 |
+ | 0.5776 | 28.0 | 6685 | 1.1121 | 0.6105 |
+ | 0.5531 | 29.0 | 6923 | 1.1497 | 0.6030 |
+ | 0.6275 | 30.0 | 7162 | 1.1796 | 0.6028 |
+ | 0.5373 | 31.0 | 7401 | 1.1306 | 0.6132 |
+ | 0.4775 | 32.0 | 7640 | 1.1523 | 0.6058 |
+ | 0.5469 | 33.0 | 7878 | 1.1634 | 0.6127 |
+ | 0.4934 | 34.0 | 8117 | 1.1853 | 0.616 |
+ | 0.5233 | 35.0 | 8356 | 1.2018 | 0.6055 |
+ | 0.4896 | 36.0 | 8595 | 1.1585 | 0.6108 |
+ | 0.5122 | 37.0 | 8833 | 1.1874 | 0.6146 |
+ | 0.4726 | 38.0 | 9072 | 1.1608 | 0.6193 |
+ | 0.4372 | 39.0 | 9311 | 1.2403 | 0.6132 |
+ | 0.498 | 40.0 | 9550 | 1.1752 | 0.6201 |
+ | 0.4813 | 41.0 | 9788 | 1.2005 | 0.6166 |
+ | 0.4762 | 42.0 | 10027 | 1.2285 | 0.6022 |
+ | 0.4852 | 43.0 | 10266 | 1.2192 | 0.6119 |
+ | 0.4332 | 44.0 | 10505 | 1.2391 | 0.6218 |
+ | 0.3998 | 45.0 | 10743 | 1.1779 | 0.6196 |
+ | 0.4467 | 46.0 | 10982 | 1.2048 | 0.6284 |
+ | 0.4332 | 47.0 | 11221 | 1.2302 | 0.6188 |
+ | 0.4529 | 48.0 | 11460 | 1.2220 | 0.6188 |
+ | 0.4281 | 49.0 | 11698 | 1.2013 | 0.624 |
+ | 0.4199 | 49.84 | 11900 | 1.2164 | 0.6152 |


  ### Framework versions

  - Transformers 4.33.2
- - Pytorch 1.13.1+cpu
+ - Pytorch 2.0.1+cu117
  - Datasets 2.14.5
  - Tokenizers 0.13.3
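
For context, the scheduler and epoch settings listed in the README hunk above map directly onto the Transformers `TrainingArguments` API. The sketch below is illustrative only: the learning rate, batch sizes, and seed are not visible in this diff, so the values marked as assumptions are placeholders rather than the settings actually used for this checkpoint.

```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters visible in the diff above;
# NOT the exact training script used for this commit.
training_args = TrainingArguments(
    output_dir="vit-finetune",       # hypothetical output directory
    num_train_epochs=50,             # from the diff: num_epochs changed 3 -> 50
    lr_scheduler_type="linear",      # from the diff: lr_scheduler_type
    warmup_ratio=0.1,                # from the diff: lr_scheduler_warmup_ratio
    adam_beta1=0.9,                  # from the diff: Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # from the diff: epsilon=1e-08
    learning_rate=5e-5,              # assumption: not shown in the visible hunk
    per_device_train_batch_size=16,  # assumption: not shown in the visible hunk
)
```

These arguments would then be passed to a `Trainer` together with the ViT model and the imagefolder dataset named in the card.
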
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2ff8c4aed63cf38a6ea0b3162b90c1ec313a67fad5202e58b6e36ea171b2d15d
+ oid sha256:3c636f798308a402ba111cd3fa137b4cc1383fd3ae0f51dd415f2f166a4ec43b
  size 343281005
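
For completeness, a minimal inference sketch for the updated checkpoint using the `image-classification` pipeline. The repository id below is a placeholder, since this commit page only names the base model (`google/vit-base-patch16-224-in21k`), not the fine-tuned repository.

```python
from PIL import Image
from transformers import pipeline

# "DifeiT/vit-finetuned" is a hypothetical repo id; substitute the actual
# repository that contains the pytorch_model.bin updated in this commit.
classifier = pipeline("image-classification", model="DifeiT/vit-finetuned")

image = Image.open("example.jpg")   # any RGB input image
print(classifier(image, top_k=3))   # top-3 predicted labels with scores
```
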