---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.55
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# image_classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co./google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3640
- Accuracy: 0.55
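
Since usage instructions are not yet filled in below, here is a minimal inference sketch. The checkpoint path is a placeholder (point it at the trainer's `image_classification` output directory or at the Hub repo id the model is pushed to), and the example image URL is only illustrative.

```python
from PIL import Image
import requests
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder: replace with the fine-tuned checkpoint directory or Hub repo id.
checkpoint = "./image_classification"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

# Illustrative image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess to the 224x224 ViT input format and run a forward pass.
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# Map the highest-scoring logit back to the label learned during fine-tuning.
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```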

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
- mixed_precision_training: Native AMP
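
For reference, these values map onto a `TrainingArguments` configuration roughly like the sketch below. This is a reconstruction under assumptions, not the original training script: the output directory, evaluation/save strategies, and column handling are guesses, while the numeric values mirror the list above.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup matching the listed hyperparameters.
# Output directory and eval/save strategies are assumptions, not from the original run.
training_args = TrainingArguments(
    output_dir="image_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="epoch",
    save_strategy="epoch",
    remove_unused_columns=False,     # keep the image column for the collator
)
```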

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1309        | 1.0   | 20   | 1.3481          | 0.4938   |
| 1.0746        | 2.0   | 40   | 1.3706          | 0.475    |
| 1.0367        | 3.0   | 60   | 1.3161          | 0.5375   |
| 0.9814        | 4.0   | 80   | 1.3837          | 0.45     |
| 0.886         | 5.0   | 100  | 1.3633          | 0.4875   |
| 0.8096        | 6.0   | 120  | 1.3045          | 0.5125   |
| 0.7669        | 7.0   | 140  | 1.3903          | 0.4938   |
| 0.708         | 8.0   | 160  | 1.2867          | 0.5125   |
| 0.6265        | 9.0   | 180  | 1.2244          | 0.5625   |
| 0.6191        | 10.0  | 200  | 1.3461          | 0.525    |
| 0.5598        | 11.0  | 220  | 1.3266          | 0.5625   |
| 0.4667        | 12.0  | 240  | 1.3050          | 0.5563   |
| 0.4613        | 13.0  | 260  | 1.3329          | 0.5375   |
| 0.4268        | 14.0  | 280  | 1.4020          | 0.5312   |
| 0.4256        | 15.0  | 300  | 1.3770          | 0.5188   |
| 0.3727        | 16.0  | 320  | 1.3655          | 0.5188   |
| 0.316         | 17.0  | 340  | 1.3642          | 0.5188   |
| 0.3223        | 18.0  | 360  | 1.2535          | 0.5938   |
| 0.3064        | 19.0  | 380  | 1.4173          | 0.4875   |
| 0.2866        | 20.0  | 400  | 1.3343          | 0.5625   |
| 0.2781        | 21.0  | 420  | 1.5072          | 0.4813   |
| 0.3027        | 22.0  | 440  | 1.5067          | 0.5125   |
| 0.26          | 23.0  | 460  | 1.4456          | 0.5687   |
| 0.2156        | 24.0  | 480  | 1.4825          | 0.525    |
| 0.1908        | 25.0  | 500  | 1.5369          | 0.5375   |
| 0.213         | 26.0  | 520  | 1.5397          | 0.5188   |
| 0.241         | 27.0  | 540  | 1.4804          | 0.5125   |
| 0.1974        | 28.0  | 560  | 1.5786          | 0.5062   |
| 0.225         | 29.0  | 580  | 1.4677          | 0.5375   |
| 0.2459        | 30.0  | 600  | 1.5392          | 0.5312   |
| 0.2146        | 31.0  | 620  | 1.6734          | 0.4625   |
| 0.1891        | 32.0  | 640  | 1.5012          | 0.55     |
| 0.2231        | 33.0  | 660  | 1.6265          | 0.5      |
| 0.1903        | 34.0  | 680  | 1.5405          | 0.5312   |
| 0.1852        | 35.0  | 700  | 1.6295          | 0.5      |
| 0.1768        | 36.0  | 720  | 1.5758          | 0.5375   |
| 0.1486        | 37.0  | 740  | 1.6176          | 0.5188   |
| 0.1814        | 38.0  | 760  | 1.5107          | 0.5375   |
| 0.1642        | 39.0  | 780  | 1.5315          | 0.55     |
| 0.1822        | 40.0  | 800  | 1.6309          | 0.525    |
| 0.1819        | 41.0  | 820  | 1.7033          | 0.4938   |
| 0.1326        | 42.0  | 840  | 1.6107          | 0.5437   |
| 0.1452        | 43.0  | 860  | 1.6219          | 0.55     |
| 0.128         | 44.0  | 880  | 1.4348          | 0.5813   |
| 0.1103        | 45.0  | 900  | 1.6185          | 0.5687   |
| 0.1386        | 46.0  | 920  | 1.5848          | 0.5312   |
| 0.1021        | 47.0  | 940  | 1.6036          | 0.5563   |
| 0.1414        | 48.0  | 960  | 1.5455          | 0.575    |
| 0.1989        | 49.0  | 980  | 1.5955          | 0.525    |
| 0.1458        | 50.0  | 1000 | 1.5511          | 0.55     |


### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1