---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model
  results: []
---

# model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co./google/vit-base-patch16-224-in21k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1898
- Accuracy: 0.9243

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2732        | 0.1   | 100  | 0.3969          | 0.8461   |
| 0.2784        | 0.21  | 200  | 0.3714          | 0.8579   |
| 0.301         | 0.31  | 300  | 0.3504          | 0.8376   |
| 0.2372        | 0.42  | 400  | 0.3391          | 0.8812   |
| 0.3136        | 0.52  | 500  | 0.2559          | 0.8967   |
| 0.3517        | 0.62  | 600  | 0.4141          | 0.8397   |
| 0.3312        | 0.73  | 700  | 0.3043          | 0.8841   |
| 0.2515        | 0.83  | 800  | 0.2541          | 0.9062   |
| 0.2854        | 0.93  | 900  | 0.2561          | 0.9006   |
| 0.2594        | 1.04  | 1000 | 0.2681          | 0.9020   |
| 0.177         | 1.14  | 1100 | 0.3406          | 0.8773   |
| 0.2717        | 1.25  | 1200 | 0.2266          | 0.9171   |
| 0.2197        | 1.35  | 1300 | 0.2080          | 0.9236   |
| 0.155         | 1.45  | 1400 | 0.2048          | 0.9236   |
| 0.2657        | 1.56  | 1500 | 0.2037          | 0.9256   |
| 0.118         | 1.66  | 1600 | 0.2616          | 0.9096   |
| 0.1823        | 1.77  | 1700 | 0.2158          | 0.9241   |
| 0.2175        | 1.87  | 1800 | 0.2159          | 0.9182   |
| 0.143         | 1.97  | 1900 | 0.1898          | 0.9243   |
| 0.1051        | 2.08  | 2000 | 0.2308          | 0.9226   |
| 0.1963        | 2.18  | 2100 | 0.2354          | 0.9205   |
| 0.0524        | 2.28  | 2200 | 0.2298          | 0.9282   |
| 0.097         | 2.39  | 2300 | 0.2495          | 0.9241   |
| 0.0744        | 2.49  | 2400 | 0.2493          | 0.9194   |
| 0.0744        | 2.6   | 2500 | 0.2429          | 0.9323   |
| 0.0345        | 2.7   | 2600 | 0.2587          | 0.9252   |
| 0.0097        | 2.8   | 2700 | 0.2284          | 0.9265   |
| 0.0775        | 2.91  | 2800 | 0.2242          | 0.9321   |
| 0.0634        | 3.01  | 2900 | 0.2314          | 0.9286   |
| 0.0109        | 3.12  | 3000 | 0.2203          | 0.9338   |
| 0.0039        | 3.22  | 3100 | 0.2575          | 0.9358   |
| 0.0139        | 3.32  | 3200 | 0.2570          | 0.9356   |
| 0.0358        | 3.43  | 3300 | 0.2630          | 0.9335   |
| 0.0347        | 3.53  | 3400 | 0.2633          | 0.9358   |
| 0.0408        | 3.63  | 3500 | 0.2591          | 0.9335   |
| 0.041         | 3.74  | 3600 | 0.2613          | 0.9367   |
| 0.004         | 3.84  | 3700 | 0.2587          | 0.9370   |
| 0.0389        | 3.95  | 3800 | 0.2535          | 0.9373   |

### Framework versions

- Transformers 4.35.0
- PyTorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
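The hyperparameters listed above map onto the Hugging Face `TrainingArguments` API roughly as follows. This is a sketch, not the original training script: the `output_dir` and the steps-based evaluation schedule (every 100 steps, inferred from the results table) are assumptions.

```python
# Sketch of TrainingArguments matching this card's hyperparameters.
# Assumes Transformers 4.35.x (the version listed in the card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model",              # assumption: matches the card's model name
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",      # Adam betas/epsilon above are the defaults
    num_train_epochs=4,
    evaluation_strategy="steps",     # assumption: eval every 100 steps,
    eval_steps=100,                  # matching the results table
)
```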
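Although the card does not name the training set, its approximate size can be estimated from the log: epoch 1.04 was reached at step 1000, so one epoch is about 962 optimizer steps, and with a train batch size of 16 that implies roughly 15,400 training examples.

```python
# Back-of-the-envelope estimate of the training-set size implied by the log.
steps, epoch = 1000, 1.04               # from the results table
batch_size = 16                         # train_batch_size above

steps_per_epoch = steps / epoch         # ~961.5 steps per epoch
approx_train_examples = steps_per_epoch * batch_size
print(round(approx_train_examples))     # -> 15385
```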