---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_tiny_sgd_00001_fold2
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.3544093178036606
---

smids_1x_deit_tiny_sgd_00001_fold2

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1866
  • Accuracy: 0.3544
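
The card does not include a usage snippet. The following is a minimal inference sketch, assuming the checkpoint is available on the Hub as hkivancoral/smids_1x_deit_tiny_sgd_00001_fold2 and that the class labels from the training imagefolder dataset are stored in the model config; `example.jpg` is a placeholder path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed Hub repo id; replace with a local path if the model is stored locally.
model_id = "hkivancoral/smids_1x_deit_tiny_sgd_00001_fold2"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```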

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
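
The original training script is not part of this card. The sketch below only mirrors the hyperparameters listed above in `TrainingArguments` form; the output directory and per-epoch evaluation strategy are assumptions, not values taken from the card.

```python
from transformers import TrainingArguments

# Sketch of Trainer settings matching the list above.
# output_dir and evaluation_strategy are assumptions.
training_args = TrainingArguments(
    output_dir="smids_1x_deit_tiny_sgd_00001_fold2",
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",
)
# Trainer's default AdamW optimizer uses betas=(0.9, 0.999) and epsilon=1e-08,
# matching the optimizer reported above.
```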

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3496        | 1.0   | 75   | 1.3448          | 0.3444   |
| 1.342         | 2.0   | 150  | 1.3359          | 0.3444   |
| 1.3049        | 3.0   | 225  | 1.3273          | 0.3428   |
| 1.3212        | 4.0   | 300  | 1.3189          | 0.3428   |
| 1.2683        | 5.0   | 375  | 1.3112          | 0.3394   |
| 1.3476        | 6.0   | 450  | 1.3037          | 0.3394   |
| 1.3281        | 7.0   | 525  | 1.2966          | 0.3378   |
| 1.2813        | 8.0   | 600  | 1.2897          | 0.3394   |
| 1.3177        | 9.0   | 675  | 1.2831          | 0.3394   |
| 1.2768        | 10.0  | 750  | 1.2769          | 0.3394   |
| 1.2973        | 11.0  | 825  | 1.2710          | 0.3394   |
| 1.2616        | 12.0  | 900  | 1.2654          | 0.3428   |
| 1.2694        | 13.0  | 975  | 1.2600          | 0.3428   |
| 1.1891        | 14.0  | 1050 | 1.2550          | 0.3361   |
| 1.2441        | 15.0  | 1125 | 1.2502          | 0.3411   |
| 1.211         | 16.0  | 1200 | 1.2456          | 0.3428   |
| 1.247         | 17.0  | 1275 | 1.2413          | 0.3411   |
| 1.2791        | 18.0  | 1350 | 1.2372          | 0.3411   |
| 1.2453        | 19.0  | 1425 | 1.2333          | 0.3428   |
| 1.2386        | 20.0  | 1500 | 1.2296          | 0.3444   |
| 1.2461        | 21.0  | 1575 | 1.2262          | 0.3461   |
| 1.2333        | 22.0  | 1650 | 1.2229          | 0.3461   |
| 1.2716        | 23.0  | 1725 | 1.2198          | 0.3478   |
| 1.2019        | 24.0  | 1800 | 1.2169          | 0.3461   |
| 1.1715        | 25.0  | 1875 | 1.2141          | 0.3444   |
| 1.1932        | 26.0  | 1950 | 1.2116          | 0.3461   |
| 1.2512        | 27.0  | 2025 | 1.2092          | 0.3444   |
| 1.1951        | 28.0  | 2100 | 1.2069          | 0.3444   |
| 1.2421        | 29.0  | 2175 | 1.2047          | 0.3461   |
| 1.1922        | 30.0  | 2250 | 1.2027          | 0.3478   |
| 1.2041        | 31.0  | 2325 | 1.2008          | 0.3478   |
| 1.2208        | 32.0  | 2400 | 1.1991          | 0.3478   |
| 1.1905        | 33.0  | 2475 | 1.1975          | 0.3478   |
| 1.1949        | 34.0  | 2550 | 1.1960          | 0.3478   |
| 1.1944        | 35.0  | 2625 | 1.1946          | 0.3527   |
| 1.1832        | 36.0  | 2700 | 1.1934          | 0.3561   |
| 1.2088        | 37.0  | 2775 | 1.1923          | 0.3577   |
| 1.2643        | 38.0  | 2850 | 1.1913          | 0.3594   |
| 1.2153        | 39.0  | 2925 | 1.1904          | 0.3561   |
| 1.2054        | 40.0  | 3000 | 1.1896          | 0.3561   |
| 1.188         | 41.0  | 3075 | 1.1889          | 0.3561   |
| 1.2171        | 42.0  | 3150 | 1.1883          | 0.3577   |
| 1.1949        | 43.0  | 3225 | 1.1878          | 0.3577   |
| 1.159         | 44.0  | 3300 | 1.1874          | 0.3561   |
| 1.1443        | 45.0  | 3375 | 1.1871          | 0.3544   |
| 1.1683        | 46.0  | 3450 | 1.1869          | 0.3544   |
| 1.2029        | 47.0  | 3525 | 1.1867          | 0.3544   |
| 1.1913        | 48.0  | 3600 | 1.1867          | 0.3544   |
| 1.1814        | 49.0  | 3675 | 1.1866          | 0.3544   |
| 1.1739        | 50.0  | 3750 | 1.1866          | 0.3544   |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
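
As a convenience (not part of the original card), the installed versions can be compared against the list above with:

```python
# Print installed library versions to check against the list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```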