---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: beit-base-patch16-224
  results: []
---

# beit-base-patch16-224

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co./microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5338
- Accuracy: 0.7165
- Precision: 0.7127
- Recall: 0.7165
- F1 Score: 0.7139

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| No log        | 0.8   | 2    | 0.7127          | 0.5686   | 0.3992    | 0.5686 | 0.4691   |
| No log        | 2.0   | 5    | 0.5967          | 0.6863   | 0.7053    | 0.6863 | 0.6139   |
| No log        | 2.8   | 7    | 0.5384          | 0.7843   | 0.7801    | 0.7843 | 0.7792   |
| No log        | 4.0   | 10   | 0.6429          | 0.6078   | 0.6547    | 0.6078 | 0.6164   |
| No log        | 4.8   | 12   | 0.6321          | 0.7255   | 0.7205    | 0.7255 | 0.7011   |
| No log        | 6.0   | 15   | 0.6473          | 0.7255   | 0.7164    | 0.7255 | 0.7095   |
| No log        | 6.8   | 17   | 0.7575          | 0.6863   | 0.6694    | 0.6863 | 0.6584   |
| No log        | 8.0   | 20   | 0.9926          | 0.7255   | 0.7312    | 0.7255 | 0.6908   |
| No log        | 8.8   | 22   | 0.9139          | 0.7255   | 0.7205    | 0.7255 | 0.7011   |
| No log        | 10.0  | 25   | 1.0884          | 0.7059   | 0.6937    | 0.7059 | 0.6845   |
| No log        | 10.8  | 27   | 1.2796          | 0.7451   | 0.7521    | 0.7451 | 0.7179   |
| 0.287         | 12.0  | 30   | 1.3326          | 0.6863   | 0.6704    | 0.6863 | 0.6680   |
| 0.287         | 12.8  | 32   | 1.5649          | 0.7255   | 0.7205    | 0.7255 | 0.7011   |
| 0.287         | 14.0  | 35   | 1.7452          | 0.7255   | 0.7205    | 0.7255 | 0.7011   |
| 0.287         | 14.8  | 37   | 1.7826          | 0.7255   | 0.7205    | 0.7255 | 0.7011   |
| 0.287         | 16.0  | 40   | 1.9538          | 0.7255   | 0.7312    | 0.7255 | 0.6908   |
| 0.287         | 16.8  | 42   | 1.8850          | 0.6863   | 0.6694    | 0.6863 | 0.6584   |
| 0.287         | 18.0  | 45   | 1.7633          | 0.6863   | 0.6739    | 0.6863 | 0.6756   |
| 0.287         | 18.8  | 47   | 1.7925          | 0.7059   | 0.6940    | 0.7059 | 0.6925   |
| 0.287         | 20.0  | 50   | 2.1156          | 0.7255   | 0.7312    | 0.7255 | 0.6908   |
| 0.287         | 20.8  | 52   | 2.0156          | 0.7255   | 0.7205    | 0.7255 | 0.7011   |
| 0.287         | 22.0  | 55   | 1.8471          | 0.7255   | 0.7164    | 0.7255 | 0.7095   |
| 0.287         | 22.8  | 57   | 1.7831          | 0.7647   | 0.7593    | 0.7647 | 0.7567   |
| 0.0041        | 24.0  | 60   | 1.7628          | 0.7647   | 0.7593    | 0.7647 | 0.7567   |
| 0.0041        | 24.8  | 62   | 1.8077          | 0.7451   | 0.7382    | 0.7451 | 0.7335   |
| 0.0041        | 26.0  | 65   | 1.8068          | 0.7843   | 0.7823    | 0.7843 | 0.7745   |
| 0.0041        | 26.8  | 67   | 1.7925          | 0.7647   | 0.7593    | 0.7647 | 0.7567   |
| 0.0041        | 28.0  | 70   | 1.7721          | 0.7843   | 0.7823    | 0.7843 | 0.7745   |
| 0.0041        | 28.8  | 72   | 1.7919          | 0.7647   | 0.7624    | 0.7647 | 0.7510   |
| 0.0041        | 30.0  | 75   | 1.9588          | 0.7451   | 0.7521    | 0.7451 | 0.7179   |
| 0.0041        | 30.8  | 77   | 1.9200          | 0.7451   | 0.7521    | 0.7451 | 0.7179   |
| 0.0041        | 32.0  | 80   | 1.7746          | 0.7451   | 0.7521    | 0.7451 | 0.7179   |
| 0.0041        | 32.8  | 82   | 1.7253          | 0.7647   | 0.7624    | 0.7647 | 0.7510   |
| 0.0041        | 34.0  | 85   | 1.6992          | 0.7451   | 0.7382    | 0.7451 | 0.7335   |
| 0.0041        | 34.8  | 87   | 1.6938          | 0.7451   | 0.7382    | 0.7451 | 0.7335   |
| 0.0031        | 36.0  | 90   | 1.7014          | 0.7451   | 0.7382    | 0.7451 | 0.7335   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
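
The snippet below is a minimal sketch, not the original training script: it maps the hyperparameters listed under "Training hyperparameters" onto `transformers.TrainingArguments`. The number of labels, evaluation schedule, and dataset objects are assumptions, since the card does not document them; the Adam betas and epsilon match the `TrainingArguments` defaults, so they are not set explicitly.

```python
# Sketch of the listed hyperparameters as TrainingArguments, assuming a
# single-GPU run (48 per device x 4 accumulation steps = 192 total batch size).
from transformers import AutoModelForImageClassification, TrainingArguments

NUM_LABELS = 2  # placeholder: the actual number of classes is not documented

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/beit-base-patch16-224",
    num_labels=NUM_LABELS,
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

args = TrainingArguments(
    output_dir="beit-base-patch16-224",
    learning_rate=5e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    gradient_accumulation_steps=4,
    num_train_epochs=45,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",  # assumption; the card does not state the eval schedule
)

# A Trainer would then be built with hypothetical train/eval datasets and a
# compute_metrics function for accuracy, precision, recall, and F1, e.g.:
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds, compute_metrics=compute_metrics)
# trainer.train()
```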
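
For inference, the fine-tuned checkpoint can be loaded with a standard `transformers` image-classification pipeline. This is an illustrative sketch only; the repository id and image path are hypothetical placeholders, not values from this card.

```python
# Illustrative inference sketch; repository id and image file are hypothetical.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/beit-base-patch16-224",  # hypothetical repo id for this fine-tune
)

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
print(classifier(image))  # list of {"label": ..., "score": ...} dicts
```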