all-observation-type

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the all-multi-class dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0077
  • F1: 0.0913
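
Since the description sections below are still placeholders, here is a minimal usage sketch. It assumes the checkpoint is published on the Hub as shevek/all-observation-type (the repo id shown in the model tree below) and uses a standard single-label image-classification head; example.jpg is a placeholder path.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Repo id taken from this card; adjust if the checkpoint lives elsewhere.
repo_id = "shevek/all-observation-type"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Single-label argmax decoding is an assumption, not confirmed by this card.
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```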

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
  • mixed_precision_training: Native AMP
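
As a rough reconstruction, these settings map onto a transformers TrainingArguments object as sketched below. output_dir is a placeholder, and the Adam betas and epsilon listed above are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="all-observation-type",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,  # "Native AMP" mixed-precision training
)
```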

Training results

| Training Loss | Epoch   | Step | Validation Loss | F1     |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0726        | 1.1628  | 100  | 0.0660          | 0.0    |
| 0.0264        | 2.3256  | 200  | 0.0247          | 0.0    |
| 0.0161        | 3.4884  | 300  | 0.0165          | 0.0    |
| 0.0133        | 4.6512  | 400  | 0.0135          | 0.0    |
| 0.0124        | 5.8140  | 500  | 0.0120          | 0.0    |
| 0.011         | 6.9767  | 600  | 0.0112          | 0.0    |
| 0.0114        | 8.1395  | 700  | 0.0107          | 0.0    |
| 0.0109        | 9.3023  | 800  | 0.0103          | 0.0    |
| 0.0096        | 10.4651 | 900  | 0.0102          | 0.0    |
| 0.0099        | 11.6279 | 1000 | 0.0098          | 0.0    |
| 0.0089        | 12.7907 | 1100 | 0.0094          | 0.0    |
| 0.0091        | 13.9535 | 1200 | 0.0093          | 0.0    |
| 0.0081        | 15.1163 | 1300 | 0.0089          | 0.0    |
| 0.0073        | 16.2791 | 1400 | 0.0089          | 0.0    |
| 0.0071        | 17.4419 | 1500 | 0.0085          | 0.0    |
| 0.0068        | 18.6047 | 1600 | 0.0082          | 0.0183 |
| 0.0064        | 19.7674 | 1700 | 0.0082          | 0.0365 |
| 0.0061        | 20.9302 | 1800 | 0.0086          | 0.0091 |
| 0.0054        | 22.0930 | 1900 | 0.0082          | 0.0594 |
| 0.0051        | 23.2558 | 2000 | 0.0080          | 0.0502 |
| 0.0048        | 24.4186 | 2100 | 0.0079          | 0.0639 |
| 0.0045        | 25.5814 | 2200 | 0.0080          | 0.0639 |
| 0.0036        | 26.7442 | 2300 | 0.0079          | 0.1027 |
| 0.0038        | 27.9070 | 2400 | 0.0079          | 0.1027 |
| 0.0032        | 29.0698 | 2500 | 0.0077          | 0.0913 |
| 0.004         | 30.2326 | 2600 | 0.0079          | 0.1027 |
| 0.003         | 31.3953 | 2700 | 0.0081          | 0.0936 |
| 0.0029        | 32.5581 | 2800 | 0.0080          | 0.0890 |
| 0.0033        | 33.7209 | 2900 | 0.0081          | 0.0845 |
| 0.0029        | 34.8837 | 3000 | 0.0081          | 0.1256 |
| 0.0025        | 36.0465 | 3100 | 0.0081          | 0.1347 |
| 0.0027        | 37.2093 | 3200 | 0.0081          | 0.1324 |
| 0.0028        | 38.3721 | 3300 | 0.0082          | 0.1324 |
| 0.0023        | 39.5349 | 3400 | 0.0082          | 0.1324 |
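
F1 stays at 0.0 for roughly the first 17 epochs and peaks at 0.1347, a pattern consistent with an averaging scheme that heavily penalizes rarely predicted classes. The card does not say how F1 was computed; a plausible compute_metrics sketch, assuming macro-averaged F1 over argmax predictions, would be:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical metric function; the macro averaging here is an
# assumption, since the card does not specify the F1 variant used.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, predictions, average="macro")}
```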

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model details

  • Format: Safetensors
  • Model size: 86.5M params
  • Tensor type: F32

Model tree for shevek/all-observation-type

  • Finetuned from: google/vit-base-patch16-224-in21k