---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-base-patch16-224-in21k-finetuned-papsmear
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8897058823529411
---

vit-base-patch16-224-in21k-finetuned-papsmear

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3853
  • Accuracy: 0.8897
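
For quick testing, a minimal inference sketch using the 🤗 Transformers image-classification pipeline is shown below. The repo id `wcosmas/vit-base-patch16-224-in21k-finetuned-papsmear` and the input image path are assumptions; substitute the actual checkpoint location and your own image.

```python
from PIL import Image
from transformers import pipeline

# Assumed repo id; replace with the actual checkpoint location if it differs.
model_id = "wcosmas/vit-base-patch16-224-in21k-finetuned-papsmear"

# The pipeline wraps the ViT image processor and classification head for us.
classifier = pipeline("image-classification", model=model_id)

# "papsmear_slide.png" is a placeholder path for an input image.
image = Image.open("papsmear_slide.png")
predictions = classifier(image)

# Each prediction is a dict with a class label and its confidence score.
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```

The pipeline applies the same resizing and normalization used during fine-tuning, so no manual preprocessing is needed for single-image predictions.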

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction of the corresponding Trainer setup is sketched after the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
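
The original training script is not included in this card; the snippet below is only a hedged reconstruction of a Trainer setup whose `TrainingArguments` mirror the list above. The data directory, train/eval split, image transform, and accuracy function are assumptions following the standard ViT fine-tuning recipe, not the author's code.

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import (
    Trainer,
    TrainingArguments,
    ViTForImageClassification,
    ViTImageProcessor,
)

base_model = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(base_model)

# Assumption: Pap smear images sit in class-named subfolders readable by the
# `imagefolder` builder; the path and the held-out split are placeholders.
ds = load_dataset("imagefolder", data_dir="path/to/papsmear_images", split="train")
ds = ds.train_test_split(test_size=0.2, seed=42)
labels = ds["train"].features["label"].names

def transform(batch):
    # Resize/normalize PIL images into the pixel_values tensor ViT expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

ds = ds.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

def compute_metrics(eval_pred):
    # Accuracy, the metric reported in this card.
    preds = np.argmax(eval_pred.predictions, axis=1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

model = ViTForImageClassification.from_pretrained(base_model, num_labels=len(labels))

# Values below mirror the hyperparameter list above; remaining arguments are assumptions.
args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned-papsmear",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch size
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    eval_strategy="epoch",          # assumption: evaluate once per epoch
    remove_unused_columns=False,    # keep the "image" column for the transform
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collate_fn,
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
```

With a per-device batch size of 32 and 4 gradient-accumulation steps, the effective batch size matches the reported total of 128; Adam's betas and epsilon are left at their defaults, which equal the values listed above.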

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9231  | 9    | 1.7589          | 0.2426   |
| 1.7862        | 1.9487  | 19   | 1.5880          | 0.3824   |
| 1.6727        | 2.9744  | 29   | 1.4212          | 0.4265   |
| 1.5102        | 4.0     | 39   | 1.2241          | 0.5809   |
| 1.3247        | 4.9231  | 48   | 1.0906          | 0.6103   |
| 1.1047        | 5.9487  | 58   | 0.9747          | 0.6765   |
| 0.9405        | 6.9744  | 68   | 0.8745          | 0.7426   |
| 0.823         | 8.0     | 78   | 0.7833          | 0.7426   |
| 0.7244        | 8.9231  | 87   | 0.7160          | 0.7794   |
| 0.6367        | 9.9487  | 97   | 0.7328          | 0.7794   |
| 0.5537        | 10.9744 | 107  | 0.6573          | 0.7868   |
| 0.484         | 12.0    | 117  | 0.5988          | 0.8088   |
| 0.4642        | 12.9231 | 126  | 0.6268          | 0.7941   |
| 0.4166        | 13.9487 | 136  | 0.6549          | 0.7794   |
| 0.4106        | 14.9744 | 146  | 0.5330          | 0.8529   |
| 0.3947        | 16.0    | 156  | 0.5134          | 0.8382   |
| 0.3469        | 16.9231 | 165  | 0.5879          | 0.7794   |
| 0.3151        | 17.9487 | 175  | 0.5683          | 0.8382   |
| 0.2946        | 18.9744 | 185  | 0.5383          | 0.8162   |
| 0.2927        | 20.0    | 195  | 0.5682          | 0.8162   |
| 0.2879        | 20.9231 | 204  | 0.4722          | 0.8603   |
| 0.2512        | 21.9487 | 214  | 0.4806          | 0.8456   |
| 0.2633        | 22.9744 | 224  | 0.4713          | 0.8456   |
| 0.2286        | 24.0    | 234  | 0.5167          | 0.8382   |
| 0.2265        | 24.9231 | 243  | 0.3886          | 0.8824   |
| 0.2107        | 25.9487 | 253  | 0.4396          | 0.8676   |
| 0.2044        | 26.9744 | 263  | 0.4734          | 0.8456   |
| 0.1925        | 28.0    | 273  | 0.4606          | 0.8529   |
| 0.1866        | 28.9231 | 282  | 0.5061          | 0.8309   |
| 0.1928        | 29.9487 | 292  | 0.4202          | 0.8824   |
| 0.1907        | 30.9744 | 302  | 0.5120          | 0.8309   |
| 0.1631        | 32.0    | 312  | 0.4165          | 0.8676   |
| 0.1654        | 32.9231 | 321  | 0.4600          | 0.8676   |
| 0.154         | 33.9487 | 331  | 0.3834          | 0.8971   |
| 0.1459        | 34.9744 | 341  | 0.3686          | 0.8897   |
| 0.1452        | 36.0    | 351  | 0.4174          | 0.8676   |
| 0.1548        | 36.9231 | 360  | 0.3791          | 0.9044   |
| 0.1395        | 37.9487 | 370  | 0.4512          | 0.8529   |
| 0.1333        | 38.9744 | 380  | 0.3775          | 0.8897   |
| 0.1236        | 40.0    | 390  | 0.3666          | 0.8971   |
| 0.1236        | 40.9231 | 399  | 0.3892          | 0.8971   |
| 0.1314        | 41.9487 | 409  | 0.3832          | 0.8897   |
| 0.1322        | 42.9744 | 419  | 0.3919          | 0.8824   |
| 0.1156        | 44.0    | 429  | 0.3699          | 0.8971   |
| 0.1222        | 44.9231 | 438  | 0.3828          | 0.8971   |
| 0.1254        | 45.9487 | 448  | 0.3853          | 0.8897   |
| 0.1129        | 46.1538 | 450  | 0.3853          | 0.8897   |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1