---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_tiny_sgd_00001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.36666666666666664
---

# smids_1x_deit_tiny_sgd_00001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.1991
- Accuracy: 0.3667
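
The original card provides no usage example; the following is a minimal inference sketch using the 🤗 Transformers Auto classes. The repository id and the example image path are assumptions for illustration, not taken from the card.

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed repository id; point this at wherever the checkpoint is actually hosted.
repo_id = "hkivancoral/smids_1x_deit_tiny_sgd_00001_fold4"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # any input image (path is illustrative)
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```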

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
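
The card does not describe the data beyond the generic `imagefolder` loader named in the metadata. As a hedged sketch, an image-folder dataset is typically built with 🤗 Datasets as shown below; the directory layout and path are assumptions, not taken from the card.

```python
from datasets import load_dataset

# Assumed layout: data/train/<class_name>/*.jpg (one sub-directory per class).
dataset = load_dataset("imagefolder", data_dir="data")

# Class names are inferred from the sub-directory names.
print(dataset["train"].features["label"].names)
```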

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
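
As a hedged reproduction aid, the hyperparameters above map onto a 🤗 Transformers `TrainingArguments` configuration roughly as follows; the output directory and the per-epoch evaluation strategy are assumptions, not taken from the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_tiny_sgd_00001_fold4",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results table
)
```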

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3246 | 1.0 | 75 | 1.3568 | 0.3467 |
| 1.3026 | 2.0 | 150 | 1.3478 | 0.3467 |
| 1.3075 | 3.0 | 225 | 1.3392 | 0.3467 |
| 1.3769 | 4.0 | 300 | 1.3310 | 0.3467 |
| 1.2997 | 5.0 | 375 | 1.3231 | 0.3467 |
| 1.2661 | 6.0 | 450 | 1.3158 | 0.3433 |
| 1.2888 | 7.0 | 525 | 1.3087 | 0.3433 |
| 1.2851 | 8.0 | 600 | 1.3020 | 0.3433 |
| 1.2991 | 9.0 | 675 | 1.2955 | 0.3433 |
| 1.3514 | 10.0 | 750 | 1.2893 | 0.345 |
| 1.256 | 11.0 | 825 | 1.2833 | 0.3417 |
| 1.2501 | 12.0 | 900 | 1.2778 | 0.3417 |
| 1.2581 | 13.0 | 975 | 1.2726 | 0.345 |
| 1.279 | 14.0 | 1050 | 1.2675 | 0.345 |
| 1.281 | 15.0 | 1125 | 1.2628 | 0.345 |
| 1.2242 | 16.0 | 1200 | 1.2582 | 0.3483 |
| 1.1785 | 17.0 | 1275 | 1.2539 | 0.3483 |
| 1.2882 | 18.0 | 1350 | 1.2497 | 0.35 |
| 1.2177 | 19.0 | 1425 | 1.2459 | 0.3533 |
| 1.1848 | 20.0 | 1500 | 1.2422 | 0.3567 |
| 1.2931 | 21.0 | 1575 | 1.2388 | 0.3583 |
| 1.2179 | 22.0 | 1650 | 1.2355 | 0.3567 |
| 1.2465 | 23.0 | 1725 | 1.2324 | 0.3567 |
| 1.2403 | 24.0 | 1800 | 1.2294 | 0.355 |
| 1.2116 | 25.0 | 1875 | 1.2267 | 0.355 |
| 1.2221 | 26.0 | 1950 | 1.2242 | 0.36 |
| 1.167 | 27.0 | 2025 | 1.2218 | 0.36 |
| 1.2147 | 28.0 | 2100 | 1.2195 | 0.3583 |
| 1.2367 | 29.0 | 2175 | 1.2174 | 0.355 |
| 1.2142 | 30.0 | 2250 | 1.2154 | 0.355 |
| 1.2312 | 31.0 | 2325 | 1.2136 | 0.3533 |
| 1.1773 | 32.0 | 2400 | 1.2119 | 0.3517 |
| 1.1658 | 33.0 | 2475 | 1.2103 | 0.3517 |
| 1.2038 | 34.0 | 2550 | 1.2088 | 0.355 |
| 1.1521 | 35.0 | 2625 | 1.2075 | 0.3567 |
| 1.1878 | 36.0 | 2700 | 1.2062 | 0.3633 |
| 1.2013 | 37.0 | 2775 | 1.2051 | 0.365 |
| 1.1943 | 38.0 | 2850 | 1.2041 | 0.3633 |
| 1.1839 | 39.0 | 2925 | 1.2032 | 0.3667 |
| 1.1836 | 40.0 | 3000 | 1.2024 | 0.3683 |
| 1.1971 | 41.0 | 3075 | 1.2017 | 0.3683 |
| 1.1901 | 42.0 | 3150 | 1.2011 | 0.365 |
| 1.2156 | 43.0 | 3225 | 1.2005 | 0.365 |
| 1.2062 | 44.0 | 3300 | 1.2001 | 0.365 |
| 1.1956 | 45.0 | 3375 | 1.1998 | 0.365 |
| 1.2469 | 46.0 | 3450 | 1.1995 | 0.3633 |
| 1.1737 | 47.0 | 3525 | 1.1993 | 0.3667 |
| 1.1496 | 48.0 | 3600 | 1.1992 | 0.3667 |
| 1.1899 | 49.0 | 3675 | 1.1991 | 0.3667 |
| 1.2185 | 50.0 | 3750 | 1.1991 | 0.3667 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0