---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_5x_deit_tiny_rms_001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6904761904761905
---

# hushem_5x_deit_tiny_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.2679
- Accuracy: 0.6905

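A minimal inference sketch using the `transformers` pipeline API (the Hub repo id below is assumed from the model name and may differ from where this checkpoint is actually hosted):

```python
from transformers import pipeline
from PIL import Image

# Assumed repo id; replace with the actual checkpoint location if different.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_5x_deit_tiny_rms_001_fold4",
)

image = Image.open("example.jpg")  # any RGB image from the target domain
for pred in classifier(image):
    print(f"{pred['label']}: {pred['score']:.4f}")
```
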
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

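A hedged reconstruction of how these settings map onto `transformers.TrainingArguments` (the output directory, number of labels, and evaluation strategy are assumptions, not stated in this card):

```python
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    TrainingArguments,
)

base = "facebook/deit-tiny-patch16-224"
processor = AutoImageProcessor.from_pretrained(base)
model = AutoModelForImageClassification.from_pretrained(
    base,
    num_labels=4,                  # assumption: number of classes in this fold
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

training_args = TrainingArguments(
    output_dir="hushem_5x_deit_tiny_rms_001_fold4",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",   # assumption: matches the per-epoch table below
)
```
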
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1506 | 1.0 | 28 | 2.1514 | 0.2381 |
| 1.4805 | 2.0 | 56 | 1.6187 | 0.2619 |
| 1.4792 | 3.0 | 84 | 1.5112 | 0.2619 |
| 1.5148 | 4.0 | 112 | 1.3546 | 0.3095 |
| 1.3804 | 5.0 | 140 | 1.3723 | 0.4286 |
| 1.4296 | 6.0 | 168 | 1.1490 | 0.4048 |
| 1.1847 | 7.0 | 196 | 1.3299 | 0.4524 |
| 1.1564 | 8.0 | 224 | 1.0799 | 0.4762 |
| 1.0992 | 9.0 | 252 | 1.1631 | 0.5 |
| 1.0863 | 10.0 | 280 | 1.1300 | 0.4524 |
| 1.0126 | 11.0 | 308 | 0.9131 | 0.5 |
| 1.0272 | 12.0 | 336 | 0.9239 | 0.5 |
| 0.9747 | 13.0 | 364 | 0.9521 | 0.6667 |
| 0.9219 | 14.0 | 392 | 0.8729 | 0.7619 |
| 0.8522 | 15.0 | 420 | 0.6286 | 0.7381 |
| 0.8968 | 16.0 | 448 | 0.8515 | 0.6429 |
| 0.8266 | 17.0 | 476 | 0.8301 | 0.6429 |
| 0.8581 | 18.0 | 504 | 1.0046 | 0.5476 |
| 0.8265 | 19.0 | 532 | 0.8082 | 0.6429 |
| 0.8594 | 20.0 | 560 | 0.8196 | 0.6190 |
| 0.7439 | 21.0 | 588 | 0.7591 | 0.6190 |
| 0.7899 | 22.0 | 616 | 0.8303 | 0.5952 |
| 0.8223 | 23.0 | 644 | 0.6299 | 0.7143 |
| 0.8203 | 24.0 | 672 | 0.7361 | 0.7143 |
| 0.7414 | 25.0 | 700 | 0.7251 | 0.7143 |
| 0.6879 | 26.0 | 728 | 0.8771 | 0.6905 |
| 0.8008 | 27.0 | 756 | 0.8469 | 0.5714 |
| 0.7402 | 28.0 | 784 | 0.6058 | 0.7857 |
| 0.7223 | 29.0 | 812 | 0.8210 | 0.6905 |
| 0.7302 | 30.0 | 840 | 0.8614 | 0.7143 |
| 0.7098 | 31.0 | 868 | 0.9312 | 0.7143 |
| 0.7044 | 32.0 | 896 | 0.8159 | 0.7143 |
| 0.7096 | 33.0 | 924 | 0.9197 | 0.6905 |
| 0.6854 | 34.0 | 952 | 0.8631 | 0.6190 |
| 0.7442 | 35.0 | 980 | 0.8324 | 0.6667 |
| 0.6271 | 36.0 | 1008 | 0.8632 | 0.7381 |
| 0.6052 | 37.0 | 1036 | 0.8753 | 0.7143 |
| 0.6189 | 38.0 | 1064 | 1.0917 | 0.7381 |
| 0.5817 | 39.0 | 1092 | 0.9635 | 0.6429 |
| 0.5324 | 40.0 | 1120 | 1.0245 | 0.6667 |
| 0.5312 | 41.0 | 1148 | 1.1733 | 0.6905 |
| 0.5538 | 42.0 | 1176 | 1.0809 | 0.7143 |
| 0.4355 | 43.0 | 1204 | 1.0395 | 0.6667 |
| 0.3909 | 44.0 | 1232 | 1.1631 | 0.6667 |
| 0.301 | 45.0 | 1260 | 1.2110 | 0.6667 |
| 0.3678 | 46.0 | 1288 | 1.2357 | 0.6905 |
| 0.3355 | 47.0 | 1316 | 1.2487 | 0.7143 |
| 0.2983 | 48.0 | 1344 | 1.2713 | 0.6905 |
| 0.2527 | 49.0 | 1372 | 1.2679 | 0.6905 |
| 0.2761 | 50.0 | 1400 | 1.2679 | 0.6905 |

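The Accuracy column above is the kind of number a `compute_metrics` callback passed to `Trainer` reports each epoch; a minimal sketch (not necessarily the exact function used for this run) looks like:

```python
import numpy as np

def compute_metrics(eval_pred):
    # Argmax over logits, then the fraction of validation examples predicted correctly.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}
```
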
### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0