metadata
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_adamax_001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6097560975609756

hushem_1x_deit_tiny_adamax_001_fold5

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 2.5757
  • Accuracy: 0.6098
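
The snippet below is a minimal inference sketch using the transformers image-classification pipeline. The repo id is inferred from the model name on this card and the image path is a placeholder; adjust both to your setup.

```python
from transformers import pipeline

# Minimal inference sketch; the repo id is assumed from this card's model name
# and may differ from the actual published checkpoint.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_tiny_adamax_001_fold5",
)

# "example.jpg" is a placeholder path to a test image.
predictions = classifier("example.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```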

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
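
As referenced above, this is a rough sketch of how these values map onto TrainingArguments. The output directory is a placeholder and the exact Trainer setup used for this run is not part of the card.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; other settings
# (dataset loading, image processor, Trainer wiring) are not shown here.
training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_adamax_001_fold5",  # hypothetical path
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # the card reports validation metrics per epoch
    logging_strategy="epoch",
)
```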

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.5806          | 0.2683   |
| 1.936         | 2.0   | 12   | 1.3732          | 0.2683   |
| 1.936         | 3.0   | 18   | 1.1956          | 0.5122   |
| 1.3784        | 4.0   | 24   | 1.3370          | 0.2683   |
| 1.3756        | 5.0   | 30   | 1.3182          | 0.4878   |
| 1.3756        | 6.0   | 36   | 1.1965          | 0.3902   |
| 1.3121        | 7.0   | 42   | 1.1086          | 0.4878   |
| 1.3121        | 8.0   | 48   | 1.1916          | 0.4878   |
| 1.2758        | 9.0   | 54   | 1.3218          | 0.2683   |
| 1.1549        | 10.0  | 60   | 1.1453          | 0.4878   |
| 1.1549        | 11.0  | 66   | 1.2471          | 0.4146   |
| 1.0642        | 12.0  | 72   | 1.3138          | 0.5366   |
| 1.0642        | 13.0  | 78   | 0.9920          | 0.4390   |
| 0.92          | 14.0  | 84   | 0.9268          | 0.5854   |
| 0.7401        | 15.0  | 90   | 1.0701          | 0.5122   |
| 0.7401        | 16.0  | 96   | 0.9883          | 0.5366   |
| 0.6495        | 17.0  | 102  | 0.8616          | 0.6098   |
| 0.6495        | 18.0  | 108  | 1.1245          | 0.5610   |
| 0.475         | 19.0  | 114  | 1.3207          | 0.6585   |
| 0.3212        | 20.0  | 120  | 1.5923          | 0.6098   |
| 0.3212        | 21.0  | 126  | 2.0857          | 0.5122   |
| 0.1553        | 22.0  | 132  | 2.2171          | 0.5122   |
| 0.1553        | 23.0  | 138  | 2.5933          | 0.5366   |
| 0.1031        | 24.0  | 144  | 2.0291          | 0.5610   |
| 0.1068        | 25.0  | 150  | 2.0073          | 0.6098   |
| 0.1068        | 26.0  | 156  | 2.5546          | 0.5122   |
| 0.0871        | 27.0  | 162  | 2.1934          | 0.5854   |
| 0.0871        | 28.0  | 168  | 2.7013          | 0.5610   |
| 0.1961        | 29.0  | 174  | 2.9538          | 0.4878   |
| 0.0391        | 30.0  | 180  | 2.3781          | 0.6098   |
| 0.0391        | 31.0  | 186  | 2.6823          | 0.5610   |
| 0.0244        | 32.0  | 192  | 2.3033          | 0.6341   |
| 0.0244        | 33.0  | 198  | 2.5112          | 0.6098   |
| 0.0164        | 34.0  | 204  | 2.8134          | 0.5122   |
| 0.0047        | 35.0  | 210  | 2.7611          | 0.5122   |
| 0.0047        | 36.0  | 216  | 2.6509          | 0.5610   |
| 0.0008        | 37.0  | 222  | 2.6009          | 0.6098   |
| 0.0008        | 38.0  | 228  | 2.5852          | 0.6098   |
| 0.0006        | 39.0  | 234  | 2.5782          | 0.6098   |
| 0.0005        | 40.0  | 240  | 2.5761          | 0.6098   |
| 0.0005        | 41.0  | 246  | 2.5753          | 0.6098   |
| 0.0005        | 42.0  | 252  | 2.5757          | 0.6098   |
| 0.0005        | 43.0  | 258  | 2.5757          | 0.6098   |
| 0.0005        | 44.0  | 264  | 2.5757          | 0.6098   |
| 0.0005        | 45.0  | 270  | 2.5757          | 0.6098   |
| 0.0005        | 46.0  | 276  | 2.5757          | 0.6098   |
| 0.0005        | 47.0  | 282  | 2.5757          | 0.6098   |
| 0.0005        | 48.0  | 288  | 2.5757          | 0.6098   |
| 0.0005        | 49.0  | 294  | 2.5757          | 0.6098   |
| 0.0005        | 50.0  | 300  | 2.5757          | 0.6098   |

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1