# wmc_v2_vit_base_wm811k_cls_contra_learning_0916_9cls
This model is a fine-tuned version of google/vit-base-patch16-224 on an unspecified dataset (the model name suggests WM-811K wafer maps with 9 classes). It achieves the following results on the evaluation set:
- Loss: 0.1013
- Accuracy: 0.9670
- Precision: 0.9209
- Recall: 0.8649
- F1: 0.8808
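Precision, recall, and F1 sit well below accuracy here, which is consistent with macro-averaging over the 9 classes (the card does not state the averaging mode, so this is an assumption). A minimal, dependency-free sketch of macro-averaged metrics:

```python
# Sketch: macro-averaged precision/recall/F1 over classes.
# Assumption: the card's metrics are macro-averaged; this is not stated.
from collections import defaultdict

def macro_scores(y_true, y_pred):
    """Return (precision, recall, f1), macro-averaged over observed classes."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but true class was t
            fn[t] += 1          # missed an instance of class t
    classes = sorted(set(y_true) | set(y_pred))
    precs, recs, f1s = [], [], []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    n = len(classes)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

Under macro-averaging, rare defect classes weigh as much as common ones, which explains how accuracy can be 0.9670 while macro F1 is 0.8808.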
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
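These settings imply an effective batch size of 32 × 4 = 128 (matching `total_train_batch_size`) and a learning rate that decays linearly toward zero. A small sketch of that schedule, assuming zero warmup steps (none are listed) and a total step count inferred from the results table:

```python
# Sketch of the linear LR schedule implied by the hyperparameters above.
# Assumptions: zero warmup steps (none listed); TOTAL_STEPS inferred from
# the results table (~927 optimizer steps/epoch x 3 epochs).
BASE_LR = 2e-5
TOTAL_STEPS = 2781

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """LR decayed linearly from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Effective batch size: per-device batch * gradient accumulation steps.
effective_batch = 32 * 4  # = 128, matching total_train_batch_size
```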
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--:|
| 0.3763 | 0.1079 | 100 | 0.9646 | 0.6825 | 0.1404 | 0.1291 | 0.1179 |
| 0.2651 | 0.2158 | 200 | 0.6134 | 0.7668 | 0.3945 | 0.2648 | 0.2505 |
| 0.1556 | 0.3237 | 300 | 0.2849 | 0.9183 | 0.6474 | 0.5500 | 0.5700 |
| 0.1999 | 0.4316 | 400 | 0.2655 | 0.9021 | 0.7646 | 0.5318 | 0.5426 |
| 0.1746 | 0.5395 | 500 | 0.2362 | 0.9086 | 0.7687 | 0.6036 | 0.6230 |
| 0.1733 | 0.6474 | 600 | 0.2026 | 0.9509 | 0.7935 | 0.7895 | 0.7860 |
| 0.1048 | 0.7553 | 700 | 0.1498 | 0.9563 | 0.8978 | 0.7432 | 0.7662 |
| 0.1751 | 0.8632 | 800 | 0.1688 | 0.9495 | 0.8475 | 0.7802 | 0.7727 |
| 0.1087 | 0.9711 | 900 | 0.1966 | 0.9220 | 0.8840 | 0.6922 | 0.6952 |
| 0.1367 | 1.0790 | 1000 | 0.1605 | 0.9423 | 0.8138 | 0.8021 | 0.7573 |
| 0.1251 | 1.1869 | 1100 | 0.1698 | 0.9313 | 0.7926 | 0.8010 | 0.7637 |
| 0.1383 | 1.2948 | 1200 | 0.1252 | 0.9625 | 0.8940 | 0.8389 | 0.8525 |
| 0.1173 | 1.4028 | 1300 | 0.1372 | 0.9476 | 0.8857 | 0.7698 | 0.7774 |
| 0.1014 | 1.5107 | 1400 | 0.1104 | 0.9655 | 0.9173 | 0.8072 | 0.8257 |
| 0.1073 | 1.6186 | 1500 | 0.1222 | 0.9651 | 0.8932 | 0.8670 | 0.8792 |
| 0.1093 | 1.7265 | 1600 | 0.1270 | 0.9517 | 0.8591 | 0.8431 | 0.8316 |
| 0.0832 | 1.8344 | 1700 | 0.1128 | 0.9645 | 0.9080 | 0.8533 | 0.8707 |
| 0.0972 | 1.9423 | 1800 | 0.1040 | 0.9704 | 0.9309 | 0.8473 | 0.8744 |
| 0.0771 | 2.0502 | 1900 | 0.1116 | 0.9602 | 0.8525 | 0.8643 | 0.8438 |
| 0.1073 | 2.1581 | 2000 | 0.1096 | 0.9645 | 0.9117 | 0.8557 | 0.8684 |
| 0.0997 | 2.2660 | 2100 | 0.1022 | 0.9708 | 0.9292 | 0.8826 | 0.9014 |
| 0.089 | 2.3739 | 2200 | 0.1032 | 0.9691 | 0.9104 | 0.8785 | 0.8861 |
| 0.0688 | 2.4818 | 2300 | 0.1046 | 0.9652 | 0.9195 | 0.8446 | 0.8638 |
| 0.0894 | 2.5897 | 2400 | 0.0933 | 0.9727 | 0.9006 | 0.8957 | 0.8956 |
| 0.0691 | 2.6976 | 2500 | 0.0929 | 0.9714 | 0.9093 | 0.8807 | 0.8886 |
| 0.0903 | 2.8055 | 2600 | 0.1017 | 0.9666 | 0.9229 | 0.8679 | 0.8835 |
| 0.0582 | 2.9134 | 2700 | 0.1013 | 0.9670 | 0.9209 | 0.8649 | 0.8808 |
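Note that the reported final metrics come from the last logged step (2700, validation loss 0.1013), while the lowest validation loss in the table is 0.0929 at step 2500. If checkpoint selection by validation loss were desired, a minimal sketch over the logged rows (values copied from the last rows of the table; the step/loss pairs are illustrative, not a saved-checkpoint list):

```python
# Sketch: pick the logged step with the lowest validation loss.
# (step, validation_loss) pairs copied from the table above.
logged = [
    (2400, 0.0933),
    (2500, 0.0929),
    (2600, 0.1017),
    (2700, 0.1013),
]

best_step, best_loss = min(logged, key=lambda row: row[1])
# best_step == 2500, best_loss == 0.0929
```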
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1