---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-base-patch16-224-in21k-finetuned-papsmear
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9338235294117647
---

# vit-base-patch16-224-in21k-finetuned-papsmear

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.2870
- Accuracy: 0.9338
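
As a quick usage illustration (not part of the original card), here is a minimal inference sketch. It assumes the checkpoint is published under the repo id `wcosmas/vit-base-patch16-224-in21k-finetuned-papsmear`, and the image path is a placeholder:

```python
from transformers import pipeline

# Minimal inference sketch; repo id and image path are assumptions/placeholders.
classifier = pipeline(
    "image-classification",
    model="wcosmas/vit-base-patch16-224-in21k-finetuned-papsmear",
)

# Returns a list of {"label": ..., "score": ...} dicts, best guess first.
print(classifier("example_pap_smear.png"))
```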

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
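
The card only records that the data was loaded in `imagefolder` format (see the metadata above). For reference, a minimal loading sketch under that assumption; the data directory is a placeholder and the layout (one sub-directory per class) is what `imagefolder` expects:

```python
from datasets import load_dataset

# Minimal sketch, assuming images are arranged one class per sub-directory;
# the directory path is a placeholder, not from the card.
dataset = load_dataset("imagefolder", data_dir="data/papsmear")

# Class names are inferred from the folder names.
print(dataset["train"].features["label"].names)
```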

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them is shown after the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
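
A minimal sketch reproducing these settings with `transformers.TrainingArguments`; the output directory, evaluation cadence, and anything else not listed above are assumptions, not recorded on the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned-papsmear",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    eval_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08,
# matching the values listed above.
```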

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9231  | 9    | 1.7879          | 0.2059   |
| 1.8037        | 1.9487  | 19   | 1.6485          | 0.4044   |
| 1.6961        | 2.9744  | 29   | 1.4882          | 0.3971   |
| 1.5407        | 4.0     | 39   | 1.3069          | 0.5221   |
| 1.3308        | 4.9231  | 48   | 1.1339          | 0.6029   |
| 1.1074        | 5.9487  | 58   | 0.9396          | 0.75     |
| 0.9162        | 6.9744  | 68   | 0.8551          | 0.7647   |
| 0.8174        | 8.0     | 78   | 0.8291          | 0.7574   |
| 0.7135        | 8.9231  | 87   | 0.7505          | 0.7941   |
| 0.6222        | 9.9487  | 97   | 0.6434          | 0.8456   |
| 0.5445        | 10.9744 | 107  | 0.5996          | 0.8529   |
| 0.4935        | 12.0    | 117  | 0.5514          | 0.8529   |
| 0.4131        | 12.9231 | 126  | 0.5029          | 0.8603   |
| 0.4012        | 13.9487 | 136  | 0.5566          | 0.8382   |
| 0.3689        | 14.9744 | 146  | 0.5533          | 0.8382   |
| 0.3533        | 16.0    | 156  | 0.4232          | 0.8971   |
| 0.2954        | 16.9231 | 165  | 0.4589          | 0.8897   |
| 0.2907        | 17.9487 | 175  | 0.4223          | 0.8971   |
| 0.2804        | 18.9744 | 185  | 0.4056          | 0.8971   |
| 0.2469        | 20.0    | 195  | 0.3904          | 0.9118   |
| 0.2643        | 20.9231 | 204  | 0.3866          | 0.9044   |
| 0.2212        | 21.9487 | 214  | 0.4173          | 0.875    |
| 0.2476        | 22.9744 | 224  | 0.6001          | 0.8015   |
| 0.2347        | 24.0    | 234  | 0.3900          | 0.9044   |
| 0.207         | 24.9231 | 243  | 0.4033          | 0.8897   |
| 0.1803        | 25.9487 | 253  | 0.3510          | 0.9265   |
| 0.1979        | 26.9744 | 263  | 0.3723          | 0.9191   |
| 0.1821        | 28.0    | 273  | 0.4320          | 0.8824   |
| 0.1992        | 28.9231 | 282  | 0.3557          | 0.9118   |
| 0.2154        | 29.9487 | 292  | 0.3362          | 0.9191   |
| 0.1801        | 30.9744 | 302  | 0.4358          | 0.875    |
| 0.1794        | 32.0    | 312  | 0.3500          | 0.9191   |
| 0.1566        | 32.9231 | 321  | 0.3046          | 0.9265   |
| 0.1432        | 33.9487 | 331  | 0.3239          | 0.9265   |
| 0.145         | 34.9744 | 341  | 0.3311          | 0.9338   |
| 0.1578        | 36.0    | 351  | 0.3029          | 0.9338   |
| 0.1511        | 36.9231 | 360  | 0.3010          | 0.9338   |
| 0.139         | 37.9487 | 370  | 0.2982          | 0.9265   |
| 0.1294        | 38.9744 | 380  | 0.3261          | 0.9191   |
| 0.1263        | 40.0    | 390  | 0.2932          | 0.9338   |
| 0.1263        | 40.9231 | 399  | 0.2944          | 0.9338   |
| 0.1216        | 41.9487 | 409  | 0.2867          | 0.9338   |
| 0.1199        | 42.9744 | 419  | 0.2887          | 0.9338   |
| 0.128         | 44.0    | 429  | 0.2825          | 0.9338   |
| 0.1115        | 44.9231 | 438  | 0.2880          | 0.9338   |
| 0.1179        | 45.9487 | 448  | 0.2871          | 0.9338   |
| 0.12          | 46.1538 | 450  | 0.2870          | 0.9338   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1