---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_tiny_rms_001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.765
---

smids_1x_deit_tiny_rms_001_fold5

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.8052
  • Accuracy: 0.765
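
As a quick sanity check, the checkpoint can be loaded with the standard image-classification classes from transformers. The sketch below assumes the model is hosted under the Hub id `hkivancoral/smids_1x_deit_tiny_rms_001_fold5` and that `example.jpg` is a local image; both are placeholders, not something stated in this card.

```python
# Minimal inference sketch (Hub id and image path are assumptions).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/smids_1x_deit_tiny_rms_001_fold5"  # assumed Hub id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```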

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
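
The card gives no further detail on the data; `imagefolder` in the metadata refers to the generic Hugging Face loader that builds a labelled dataset from a class-per-subdirectory layout. A hedged sketch of how such a dataset is typically loaded (the directory path is a placeholder):

```python
# Sketch of loading a class-per-folder image dataset with the generic
# "imagefolder" builder; the data_dir path is a placeholder.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/smids_images")
print(dataset)                                   # splits inferred from the folder layout
print(dataset["train"].features["label"].names)  # class names taken from subfolder names
```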

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
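
These settings map roughly onto `transformers.TrainingArguments`; the sketch below is a reconstruction under that assumption (the output directory, per-epoch evaluation, and the surrounding training script are guesses, not the author's actual code):

```python
# Reconstruction of the listed hyperparameters as TrainingArguments.
# Only the values above come from the card; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_tiny_rms_001_fold5",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the results table shows one eval per epoch
    save_strategy="epoch",        # assumption
    logging_strategy="epoch",     # assumption
)
```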

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2535 | 1.0 | 75 | 0.9734 | 0.4717 |
| 1.0172 | 2.0 | 150 | 0.8857 | 0.5217 |
| 0.9205 | 3.0 | 225 | 0.8219 | 0.5633 |
| 0.8404 | 4.0 | 300 | 0.8833 | 0.54 |
| 0.8125 | 5.0 | 375 | 0.7752 | 0.615 |
| 0.8375 | 6.0 | 450 | 0.7791 | 0.6133 |
| 0.7706 | 7.0 | 525 | 0.7651 | 0.6433 |
| 0.6843 | 8.0 | 600 | 0.7674 | 0.6083 |
| 0.717 | 9.0 | 675 | 0.7318 | 0.655 |
| 0.6266 | 10.0 | 750 | 0.7160 | 0.6867 |
| 0.674 | 11.0 | 825 | 0.6761 | 0.69 |
| 0.6618 | 12.0 | 900 | 0.7236 | 0.6433 |
| 0.6204 | 13.0 | 975 | 0.7093 | 0.6733 |
| 0.6403 | 14.0 | 1050 | 0.6526 | 0.7133 |
| 0.5728 | 15.0 | 1125 | 0.7313 | 0.6617 |
| 0.5566 | 16.0 | 1200 | 0.6152 | 0.7317 |
| 0.5735 | 17.0 | 1275 | 0.6901 | 0.7083 |
| 0.6111 | 18.0 | 1350 | 0.6429 | 0.7317 |
| 0.6075 | 19.0 | 1425 | 0.6044 | 0.7533 |
| 0.5675 | 20.0 | 1500 | 0.5922 | 0.7633 |
| 0.4747 | 21.0 | 1575 | 0.6118 | 0.7483 |
| 0.5157 | 22.0 | 1650 | 0.6322 | 0.7383 |
| 0.4995 | 23.0 | 1725 | 0.6300 | 0.745 |
| 0.4632 | 24.0 | 1800 | 0.6076 | 0.74 |
| 0.4596 | 25.0 | 1875 | 0.6047 | 0.7733 |
| 0.4702 | 26.0 | 1950 | 0.6096 | 0.7633 |
| 0.5043 | 27.0 | 2025 | 0.6045 | 0.7567 |
| 0.5051 | 28.0 | 2100 | 0.5905 | 0.75 |
| 0.4664 | 29.0 | 2175 | 0.6085 | 0.7567 |
| 0.3949 | 30.0 | 2250 | 0.6634 | 0.76 |
| 0.3708 | 31.0 | 2325 | 0.6461 | 0.7667 |
| 0.3964 | 32.0 | 2400 | 0.6482 | 0.7617 |
| 0.3827 | 33.0 | 2475 | 0.6696 | 0.76 |
| 0.3422 | 34.0 | 2550 | 0.6799 | 0.765 |
| 0.3716 | 35.0 | 2625 | 0.7307 | 0.7767 |
| 0.3007 | 36.0 | 2700 | 0.7490 | 0.7583 |
| 0.2019 | 37.0 | 2775 | 0.8838 | 0.7533 |
| 0.232 | 38.0 | 2850 | 0.8738 | 0.76 |
| 0.221 | 39.0 | 2925 | 0.8842 | 0.7733 |
| 0.1875 | 40.0 | 3000 | 1.0078 | 0.7383 |
| 0.203 | 41.0 | 3075 | 1.0476 | 0.7567 |
| 0.1699 | 42.0 | 3150 | 1.0739 | 0.7567 |
| 0.171 | 43.0 | 3225 | 1.1644 | 0.7417 |
| 0.1205 | 44.0 | 3300 | 1.2501 | 0.7533 |
| 0.0811 | 45.0 | 3375 | 1.2967 | 0.755 |
| 0.0202 | 46.0 | 3450 | 1.5619 | 0.745 |
| 0.0237 | 47.0 | 3525 | 1.5862 | 0.7617 |
| 0.0127 | 48.0 | 3600 | 1.6631 | 0.7667 |
| 0.0204 | 49.0 | 3675 | 1.7536 | 0.7667 |
| 0.0042 | 50.0 | 3750 | 1.8052 | 0.765 |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
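
To check that a local environment roughly matches these pins, the installed versions can be printed (assumes the four packages are already installed):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```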