---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: google-vit-base-patch16-224-cartoon-emotion-detection
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8715596330275229
    - name: Precision
      type: precision
      value: 0.8725197999744695
    - name: Recall
      type: recall
      value: 0.8715596330275229
    - name: F1
      type: f1
      value: 0.871683140929764
---

# google-vit-base-patch16-224-cartoon-emotion-detection

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co./google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4170
- Accuracy: 0.8716
- Precision: 0.8725
- Recall: 0.8716
- F1: 0.8717

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 0.97  | 8    | 1.0942          | 0.5780   | 0.6102    | 0.5780 | 0.5496 |
| 1.3198        | 1.97  | 16   | 0.6914          | 0.7615   | 0.7498    | 0.7615 | 0.7493 |
| 0.6694        | 2.97  | 24   | 0.4702          | 0.7890   | 0.7808    | 0.7890 | 0.7781 |
| 0.2725        | 3.97  | 32   | 0.3957          | 0.8532   | 0.8514    | 0.8532 | 0.8522 |
| 0.1116        | 4.97  | 40   | 0.3428          | 0.8716   | 0.8697    | 0.8716 | 0.8693 |
| 0.1116        | 5.97  | 48   | 0.3865          | 0.8532   | 0.8514    | 0.8532 | 0.8522 |
| 0.0486        | 6.97  | 56   | 0.3445          | 0.8532   | 0.8495    | 0.8532 | 0.8507 |
| 0.0346        | 7.97  | 64   | 0.3554          | 0.8807   | 0.8921    | 0.8807 | 0.8831 |
| 0.0304        | 8.97  | 72   | 0.3100          | 0.8624   | 0.8592    | 0.8624 | 0.8605 |
| 0.0215        | 9.97  | 80   | 0.3718          | 0.8716   | 0.8700    | 0.8716 | 0.8707 |
| 0.0215        | 10.97 | 88   | 0.3946          | 0.8899   | 0.8901    | 0.8899 | 0.8896 |
| 0.0201        | 11.97 | 96   | 0.4505          | 0.8532   | 0.8558    | 0.8532 | 0.8524 |
| 0.02          | 12.97 | 104  | 0.4543          | 0.8716   | 0.8734    | 0.8716 | 0.8718 |
| 0.0181        | 13.97 | 112  | 0.3837          | 0.8899   | 0.8878    | 0.8899 | 0.8884 |
| 0.0158        | 14.97 | 120  | 0.3904          | 0.8716   | 0.8676    | 0.8716 | 0.8691 |
| 0.0158        | 15.97 | 128  | 0.3881          | 0.9083   | 0.9078    | 0.9083 | 0.9077 |
| 0.0147        | 16.97 | 136  | 0.4233          | 0.8807   | 0.8773    | 0.8807 | 0.8785 |
| 0.0138        | 17.97 | 144  | 0.4335          | 0.8716   | 0.8700    | 0.8716 | 0.8707 |
| 0.0166        | 18.97 | 152  | 0.4492          | 0.8716   | 0.8690    | 0.8716 | 0.8701 |
| 0.016         | 19.97 | 160  | 0.4170          | 0.8716   | 0.8725    | 0.8716 | 0.8717 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
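
## How to use

The card does not include a usage snippet, so the following is a minimal inference sketch rather than a verified recipe. It assumes the fine-tuned checkpoint is available locally or on the Hugging Face Hub; `MODEL_ID` and the example image path `cartoon_frame.png` are placeholders to replace with your own values.

```python
from PIL import Image
from transformers import pipeline

# Placeholder: point this at the Hub repository id or a local directory
# containing the fine-tuned checkpoint.
MODEL_ID = "google-vit-base-patch16-224-cartoon-emotion-detection"

classifier = pipeline("image-classification", model=MODEL_ID)

# Placeholder image path; any RGB cartoon frame should work.
image = Image.open("cartoon_frame.png").convert("RGB")

# The pipeline returns a list of {"label": ..., "score": ...} dicts,
# sorted with the highest-scoring emotion first.
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```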
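
## Reproducing the training setup (sketch)

The hyperparameters listed above map directly onto `TrainingArguments`. The sketch below shows one way to wire them into a standard `Trainer` run over an image-folder dataset; it is an illustration under assumptions, not the original training script. In particular, the data directory `path/to/cartoon_emotions`, the presence of `train`/`validation` splits, and the per-epoch evaluation strategy are inferred from the card, not stated in it.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoFeatureExtractor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: an image-folder layout with one sub-directory per emotion label,
# split into train/ and validation/ directories.
dataset = load_dataset("imagefolder", data_dir="path/to/cartoon_emotions")

feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
labels = dataset["train"].features["label"].names

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head with an emotion head
)

def transform(batch):
    # Resize and normalize images into the pixel_values tensor ViT expects.
    inputs = feature_extractor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    # Stack per-example tensors into a batch for the model.
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

training_args = TrainingArguments(
    output_dir="google-vit-base-patch16-224-cartoon-emotion-detection",
    learning_rate=0.00012,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,  # 64 * 4 = total train batch size of 256
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",    # assumption: the results table logs one evaluation per epoch
    remove_unused_columns=False,    # keep the raw "image" column for the transform
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=collate_fn,
)
trainer.train()
```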
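
### Evaluation metrics (sketch)

The reported recall equals the reported accuracy, which is consistent with class-weighted averaging, so the precision/recall/F1 values above were likely computed as weighted averages; that averaging choice is an inference from the numbers, not something stated in the card. A `compute_metrics` function along these lines, passed to the `Trainer` in the sketch above via `compute_metrics=compute_metrics`, would log the same four metrics at every evaluation.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Assumption: precision/recall/F1 are class-weighted averages, which matches
    # the reported recall being identical to the reported accuracy.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```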