---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - image_folder
  - nielsr/eurosat-demo
metrics:
  - accuracy
widget:
  - src: https://drive.google.com/uc?id=1trKgvkMRQ3BB0VcqnDwmieLxXhWmS8rq
    example_title: Annual Crop
  - src: https://drive.google.com/uc?id=1kWQbPNHVa_JscS0age5E0UOSBcU1bh18
    example_title: Forest
  - src: https://drive.google.com/uc?id=12YbxF-MfpMqLPB91HuTPEgcg1xnZKhGP
    example_title: Herbaceous Vegetation
  - src: https://drive.google.com/uc?id=1NkzDiaQ1ciMDf89C8uA5zGx984bwkFCi
    example_title: Highway
  - src: https://drive.google.com/uc?id=1F6r7O0rlgzaPvY6XBpFOWUTIddEIUkxx
    example_title: Industrial
  - src: https://drive.google.com/uc?id=16zOtFHZ9E17jA9Ua4PsXrUjugSs77XKm
    example_title: Pasture
  - src: https://drive.google.com/uc?id=163tqIdoVY7WFtKQlpz_bPM9WjwbJAtd
    example_title: Permanent Crop
  - src: https://drive.google.com/uc?id=1qsX-XsrE3dMp7C7LLVa6HriaABIXuBrJ
    example_title: Residential
  - src: https://drive.google.com/uc?id=1UK2praQHbNXDnctJt58rrlQZu84lxyk
    example_title: River
  - src: https://drive.google.com/uc?id=1zVAfR7N5hXy6eq1cVOd8bXPjC1sqxVir
    example_title: Sea Lake
base_model: microsoft/swin-tiny-patch4-window7-224
model-index:
  - name: swin-tiny-patch4-window7-224-finetuned-eurosat
    results:
      - task:
          type: image-classification
          name: Image Classification
        dataset:
          name: image_folder
          type: image_folder
          args: default
        metrics:
          - type: accuracy
            value: 0.9848148148148148
            name: Accuracy
---

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset. It achieves the following results on the evaluation set:

- Loss: 0.0536
- Accuracy: 0.9848
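
For quick inference, the checkpoint can be used with the `image-classification` pipeline. A minimal sketch, assuming the model is published as `nickmuchi/swin-tiny-patch4-window7-224-finetuned-eurosat` and that a local satellite image is supplied (the repo id and image path are assumptions, not confirmed by this card):

```python
# Minimal inference sketch; the repo id and the image path are assumptions.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="nickmuchi/swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed repo id
)

predictions = classifier("path/to/satellite_tile.jpg")  # any local image file or PIL.Image
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```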

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
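
The card's metadata lists `nielsr/eurosat-demo` alongside `image_folder`. A minimal loading sketch, assuming that dataset resolves with `datasets.load_dataset` and that the column names and a 90/10 hold-out split are reasonable defaults (none of these details are documented on this card):

```python
# Minimal data-loading sketch; dataset id comes from the card's metadata,
# the split size and seed are assumptions.
from datasets import load_dataset

ds = load_dataset("nielsr/eurosat-demo", split="train")
splits = ds.train_test_split(test_size=0.1, seed=42)  # assumed evaluation hold-out
train_ds, eval_ds = splits["train"], splits["test"]
print(train_ds)
```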

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
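
A minimal `TrainingArguments` sketch reproducing these values; the `output_dir` and the per-epoch evaluation strategy are assumptions (the latter is consistent with the results table below), everything else is taken from the list:

```python
# Sketch of TrainingArguments matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    evaluation_strategy="epoch",     # assumption: evaluate once per epoch
    # The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8.
)
```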

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2602        | 1.0   | 190  | 0.1310          | 0.9563   |
| 0.1975        | 2.0   | 380  | 0.1063          | 0.9637   |
| 0.142         | 3.0   | 570  | 0.0642          | 0.9767   |
| 0.1235        | 4.0   | 760  | 0.0560          | 0.9837   |
| 0.1019        | 5.0   | 950  | 0.0536          | 0.9848   |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1