Dhika committed
Commit 45d2cb3
1 Parent(s): 6745636

update model card README.md

Files changed (1): README.md (+119 -0)

README.md ADDED
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: raildefectfft1
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: defect
      type: imagefolder
      config: Dhika--defectfft
      split: validation
      args: Dhika--defectfft
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7914285714285715
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# raildefectfft1

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the defect dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7259
- Accuracy: 0.7914
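
A minimal inference sketch, assuming the checkpoint is published on the Hub as `Dhika/raildefectfft1` (adjust the model id and image path to your setup):

```python
# Hedged inference sketch. Assumptions: the checkpoint is reachable as
# "Dhika/raildefectfft1" (or point to a local checkpoint directory instead)
# and "rail.jpg" is a local test image.
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="Dhika/raildefectfft1")

image = Image.open("rail.jpg")
predictions = classifier(image)  # list of {"label": ..., "score": ...} dicts
print(predictions)
```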

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
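
The metadata above only records an `imagefolder`-type dataset (`defect`, validation split). A minimal loading sketch, assuming a local ImageFolder-style directory with one sub-folder per class; the `data/defect` path and the train/validation layout are assumptions, not documented in this card:

```python
# Hedged sketch: load an ImageFolder-style dataset with the Datasets library.
# Assumed layout: data/defect/{train,validation}/<class_name>/<image files>.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="data/defect")
print(dataset)                    # DatasetDict with the discovered splits
print(dataset["train"].features)  # "image" (Image) and "label" (ClassLabel) columns
```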

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
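
A minimal sketch of these settings as `transformers.TrainingArguments`, assuming a single device (so `train_batch_size: 30` becomes `per_device_train_batch_size=30`) and the 10-step evaluation interval visible in the results table; the output directory is a placeholder:

```python
# Hedged sketch mirroring the hyperparameters listed above; not the author's
# original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="raildefectfft1",     # placeholder output path
    learning_rate=2e-4,
    per_device_train_batch_size=30,  # assumes a single device
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",     # assumption: evaluate every 10 steps,
    eval_steps=10,                   # matching the results table below
    logging_steps=10,
)
```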

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3927 | 0.67 | 10 | 1.1308 | 0.6429 |
| 0.8111 | 1.33 | 20 | 0.9788 | 0.6629 |
| 0.513 | 2.0 | 30 | 0.7938 | 0.74 |
| 0.2943 | 2.67 | 40 | 0.8517 | 0.7343 |
| 0.2029 | 3.33 | 50 | 0.7300 | 0.7686 |
| 0.1629 | 4.0 | 60 | 0.7259 | 0.7914 |
| 0.1131 | 4.67 | 70 | 0.9103 | 0.7314 |
| 0.0955 | 5.33 | 80 | 0.8504 | 0.7657 |
| 0.0547 | 6.0 | 90 | 1.0702 | 0.72 |
| 0.0489 | 6.67 | 100 | 1.1708 | 0.6971 |
| 0.0382 | 7.33 | 110 | 1.2376 | 0.6943 |
| 0.0356 | 8.0 | 120 | 1.3361 | 0.6857 |
| 0.0311 | 8.67 | 130 | 1.1809 | 0.7229 |
| 0.0346 | 9.33 | 140 | 1.3405 | 0.7086 |
| 0.0378 | 10.0 | 150 | 1.1800 | 0.7171 |
| 0.0326 | 10.67 | 160 | 1.1292 | 0.7343 |
| 0.0319 | 11.33 | 170 | 1.0885 | 0.7371 |
| 0.0347 | 12.0 | 180 | 1.4550 | 0.6771 |
| 0.0283 | 12.67 | 190 | 1.1957 | 0.7314 |
| 0.0336 | 13.33 | 200 | 1.4648 | 0.6743 |
| 0.0175 | 14.0 | 210 | 1.4927 | 0.6771 |
| 0.0167 | 14.67 | 220 | 1.3760 | 0.7057 |
| 0.0149 | 15.33 | 230 | 1.2464 | 0.7229 |
| 0.0154 | 16.0 | 240 | 1.2553 | 0.7257 |
| 0.0135 | 16.67 | 250 | 1.2768 | 0.7314 |
| 0.0133 | 17.33 | 260 | 1.2857 | 0.7343 |
| 0.0122 | 18.0 | 270 | 1.2905 | 0.7314 |
| 0.0121 | 18.67 | 280 | 1.2929 | 0.7314 |
| 0.0115 | 19.33 | 290 | 1.2958 | 0.7314 |
| 0.0111 | 20.0 | 300 | 1.2985 | 0.7314 |
| 0.011 | 20.67 | 310 | 1.3020 | 0.7343 |
| 0.0103 | 21.33 | 320 | 1.3051 | 0.7371 |
| 0.0103 | 22.0 | 330 | 1.3075 | 0.7371 |
| 0.0104 | 22.67 | 340 | 1.3098 | 0.7371 |
| 0.0096 | 23.33 | 350 | 1.3128 | 0.7371 |
| 0.0095 | 24.0 | 360 | 1.3154 | 0.7371 |
| 0.0096 | 24.67 | 370 | 1.3162 | 0.7371 |
| 0.0093 | 25.33 | 380 | 1.3183 | 0.7371 |
| 0.0091 | 26.0 | 390 | 1.3200 | 0.7371 |
| 0.0092 | 26.67 | 400 | 1.3213 | 0.7371 |
| 0.0089 | 27.33 | 410 | 1.3219 | 0.7371 |
| 0.0092 | 28.0 | 420 | 1.3224 | 0.7371 |
| 0.0089 | 28.67 | 430 | 1.3228 | 0.7371 |
| 0.0089 | 29.33 | 440 | 1.3231 | 0.7371 |
| 0.0089 | 30.0 | 450 | 1.3233 | 0.7371 |


### Framework versions

- Transformers 4.30.1
- PyTorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3