---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_deit_tiny_sgd_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7245409015025042
---

smids_3x_deit_tiny_sgd_0001_fold1

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6481
  • Accuracy: 0.7245
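
For reference, a minimal inference sketch is shown below. It assumes the checkpoint is published on the Hub as hkivancoral/smids_3x_deit_tiny_sgd_0001_fold1 (inferred from the model name above) and uses a placeholder image path; neither is stated explicitly in this card.

```python
# Hedged inference sketch; the repo id and image path are assumptions, not from the card.
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/smids_3x_deit_tiny_sgd_0001_fold1"  # assumed Hub repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted = model.config.id2label[logits.argmax(-1).item()]
print(predicted)
```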

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
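
These settings map onto the Hugging Face Trainer roughly as in the sketch below. The output_dir and the per-epoch evaluation/logging strategies are assumptions, and data loading, preprocessing, and the model definition are omitted.

```python
# Hedged sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir, evaluation_strategy, and logging_strategy are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_deit_tiny_sgd_0001_fold1",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # the results table reports per-epoch validation
    logging_strategy="epoch",
)
```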

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.233         | 1.0   | 226   | 1.1760          | 0.3823   |
| 1.1228        | 2.0   | 452   | 1.1140          | 0.4023   |
| 1.086         | 3.0   | 678   | 1.0822          | 0.4174   |
| 1.0737        | 4.0   | 904   | 1.0561          | 0.4474   |
| 1.0492        | 5.0   | 1130  | 1.0319          | 0.4558   |
| 1.0539        | 6.0   | 1356  | 1.0095          | 0.4858   |
| 0.9458        | 7.0   | 1582  | 0.9879          | 0.5008   |
| 0.9572        | 8.0   | 1808  | 0.9681          | 0.5092   |
| 0.9484        | 9.0   | 2034  | 0.9485          | 0.5242   |
| 0.9168        | 10.0  | 2260  | 0.9300          | 0.5359   |
| 0.9317        | 11.0  | 2486  | 0.9125          | 0.5476   |
| 0.8523        | 12.0  | 2712  | 0.8952          | 0.5609   |
| 0.828         | 13.0  | 2938  | 0.8797          | 0.5843   |
| 0.8551        | 14.0  | 3164  | 0.8640          | 0.6010   |
| 0.8431        | 15.0  | 3390  | 0.8498          | 0.6127   |
| 0.7665        | 16.0  | 3616  | 0.8360          | 0.6177   |
| 0.7183        | 17.0  | 3842  | 0.8226          | 0.6210   |
| 0.7754        | 18.0  | 4068  | 0.8101          | 0.6344   |
| 0.7132        | 19.0  | 4294  | 0.7985          | 0.6394   |
| 0.7077        | 20.0  | 4520  | 0.7861          | 0.6544   |
| 0.6887        | 21.0  | 4746  | 0.7746          | 0.6628   |
| 0.7156        | 22.0  | 4972  | 0.7640          | 0.6661   |
| 0.7205        | 23.0  | 5198  | 0.7542          | 0.6761   |
| 0.6924        | 24.0  | 5424  | 0.7447          | 0.6811   |
| 0.668         | 25.0  | 5650  | 0.7359          | 0.6811   |
| 0.7303        | 26.0  | 5876  | 0.7276          | 0.6878   |
| 0.6039        | 27.0  | 6102  | 0.7198          | 0.6945   |
| 0.6316        | 28.0  | 6328  | 0.7126          | 0.6962   |
| 0.5808        | 29.0  | 6554  | 0.7060          | 0.6962   |
| 0.7521        | 30.0  | 6780  | 0.6997          | 0.7028   |
| 0.6067        | 31.0  | 7006  | 0.6939          | 0.7045   |
| 0.617         | 32.0  | 7232  | 0.6885          | 0.7062   |
| 0.5752        | 33.0  | 7458  | 0.6837          | 0.7062   |
| 0.5524        | 34.0  | 7684  | 0.6791          | 0.7078   |
| 0.645         | 35.0  | 7910  | 0.6750          | 0.7129   |
| 0.5855        | 36.0  | 8136  | 0.6712          | 0.7145   |
| 0.5981        | 37.0  | 8362  | 0.6677          | 0.7162   |
| 0.6026        | 38.0  | 8588  | 0.6646          | 0.7179   |
| 0.6372        | 39.0  | 8814  | 0.6617          | 0.7195   |
| 0.561         | 40.0  | 9040  | 0.6592          | 0.7179   |
| 0.5719        | 41.0  | 9266  | 0.6570          | 0.7179   |
| 0.5709        | 42.0  | 9492  | 0.6550          | 0.7195   |
| 0.6421        | 43.0  | 9718  | 0.6533          | 0.7212   |
| 0.5531        | 44.0  | 9944  | 0.6518          | 0.7245   |
| 0.6016        | 45.0  | 10170 | 0.6506          | 0.7245   |
| 0.6135        | 46.0  | 10396 | 0.6496          | 0.7245   |
| 0.5923        | 47.0  | 10622 | 0.6489          | 0.7245   |
| 0.5752        | 48.0  | 10848 | 0.6484          | 0.7245   |
| 0.5457        | 49.0  | 11074 | 0.6481          | 0.7245   |
| 0.586         | 50.0  | 11300 | 0.6481          | 0.7245   |
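
As a rough check, the reported accuracy could be recomputed on a held-out imagefolder split along the lines of the sketch below. The repo id and data_dir are placeholders, and the exact evaluation split behind the table above is not described in this card.

```python
# Hedged evaluation sketch; repo id and data_dir are assumptions, not from the card.
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/smids_3x_deit_tiny_sgd_0001_fold1"  # assumed Hub repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id).eval()

# A flat imagefolder directory is exposed as a single "train" split;
# this assumes the folder's class order matches the model's label ids.
dataset = load_dataset("imagefolder", data_dir="path/to/eval_images", split="train")

correct = 0
for example in dataset:
    inputs = processor(images=example["image"].convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += int(pred == example["label"])

print(f"accuracy: {correct / len(dataset):.4f}")
```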

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.1+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2