---
license: other
tags:
  - vision
  - image-segmentation
  - generated_from_trainer
model-index:
  - name: segformer-b0-finetuned-segments-toolwear
    results: []
---

# segformer-b0-finetuned-segments-toolwear

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the HorcruxNo13/toolwear_segmentsai dataset. It achieves the following results on the evaluation set:

- Loss: 0.1285
- Mean Iou: 0.3499
- Mean Accuracy: 0.6998
- Overall Accuracy: 0.6998
- Accuracy Unlabeled: nan
- Accuracy Tool: nan
- Accuracy Wear: 0.6998
- Iou Unlabeled: 0.0
- Iou Tool: nan
- Iou Wear: 0.6998
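
*Note added for clarity (an interpretation, not part of the original card): the Tool class is reported as `nan` and excluded from the averages, so Mean IoU is the mean over the two remaining classes, (0.0 + 0.6998) / 2 = 0.3499.*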

## Model description

More information needed

## Intended uses & limitations

More information needed
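
The model targets semantic segmentation of tool-wear images. As a starting point, here is a minimal inference sketch (added for illustration, not part of the original card); the Hub repo id `HorcruxNo13/segformer-b0-finetuned-segments-toolwear` and the image path are assumptions.

```python
# Hedged inference sketch; the repo id and image path are assumptions,
# not confirmed by the original card.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo_id = "HorcruxNo13/segformer-b0-finetuned-segments-toolwear"  # assumed repo id
processor = SegformerImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

image = Image.open("tool_image.png").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```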

## Training and evaluation data

More information needed
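
The card only names the `HorcruxNo13/toolwear_segmentsai` dataset. A minimal loading sketch, assuming the dataset is publicly available on the Hugging Face Hub under that id (split names and feature columns are not documented here):

```python
from datasets import load_dataset

# Assumed dataset id taken from the model summary above; inspect the
# returned DatasetDict to see the actual splits and columns.
ds = load_dataset("HorcruxNo13/toolwear_segmentsai")
print(ds)
```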

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
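
These settings correspond roughly to a `transformers` `TrainingArguments` configuration like the sketch below (added for illustration; the output directory and any omitted options such as logging or saving strategy are assumptions):

```python
from transformers import TrainingArguments

# Hedged sketch of the reported hyperparameters; output_dir and any
# omitted options are assumptions, not taken from the original card.
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,    # Adam betas/epsilon as reported (also the defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```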

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Tool | Accuracy Wear | Iou Unlabeled | Iou Tool | Iou Wear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:-------------:|:-------------:|:--------:|:--------:|
| 0.8959        | 1.82  | 20   | 0.8677          | 0.4048   | 0.8097        | 0.8097           | nan                | nan           | 0.8097        | 0.0           | nan      | 0.8097   |
| 0.6658        | 3.64  | 40   | 0.6010          | 0.3734   | 0.7468        | 0.7468           | nan                | nan           | 0.7468        | 0.0           | nan      | 0.7468   |
| 0.4389        | 5.45  | 60   | 0.4941          | 0.3634   | 0.7269        | 0.7269           | nan                | nan           | 0.7269        | 0.0           | nan      | 0.7269   |
| 0.3531        | 7.27  | 80   | 0.4390          | 0.3508   | 0.7015        | 0.7015           | nan                | nan           | 0.7015        | 0.0           | nan      | 0.7015   |
| 0.3408        | 9.09  | 100  | 0.3753          | 0.3340   | 0.6679        | 0.6679           | nan                | nan           | 0.6679        | 0.0           | nan      | 0.6679   |
| 0.3266        | 10.91 | 120  | 0.3769          | 0.3761   | 0.7521        | 0.7521           | nan                | nan           | 0.7521        | 0.0           | nan      | 0.7521   |
| 0.2791        | 12.73 | 140  | 0.3491          | 0.3918   | 0.7835        | 0.7835           | nan                | nan           | 0.7835        | 0.0           | nan      | 0.7835   |
| 0.2066        | 14.55 | 160  | 0.2705          | 0.3491   | 0.6981        | 0.6981           | nan                | nan           | 0.6981        | 0.0           | nan      | 0.6981   |
| 0.161         | 16.36 | 180  | 0.2398          | 0.3283   | 0.6567        | 0.6567           | nan                | nan           | 0.6567        | 0.0           | nan      | 0.6567   |
| 0.1558        | 18.18 | 200  | 0.2599          | 0.4021   | 0.8042        | 0.8042           | nan                | nan           | 0.8042        | 0.0           | nan      | 0.8042   |
| 0.128         | 20.0  | 220  | 0.2163          | 0.3387   | 0.6775        | 0.6775           | nan                | nan           | 0.6775        | 0.0           | nan      | 0.6775   |
| 0.11          | 21.82 | 240  | 0.2019          | 0.3599   | 0.7199        | 0.7199           | nan                | nan           | 0.7199        | 0.0           | nan      | 0.7199   |
| 0.1101        | 23.64 | 260  | 0.1905          | 0.3620   | 0.7240        | 0.7240           | nan                | nan           | 0.7240        | 0.0           | nan      | 0.7240   |
| 0.0874        | 25.45 | 280  | 0.1708          | 0.3138   | 0.6276        | 0.6276           | nan                | nan           | 0.6276        | 0.0           | nan      | 0.6276   |
| 0.0815        | 27.27 | 300  | 0.1505          | 0.3191   | 0.6382        | 0.6382           | nan                | nan           | 0.6382        | 0.0           | nan      | 0.6382   |
| 0.082         | 29.09 | 320  | 0.1641          | 0.3520   | 0.7040        | 0.7040           | nan                | nan           | 0.7040        | 0.0           | nan      | 0.7040   |
| 0.0694        | 30.91 | 340  | 0.1456          | 0.3322   | 0.6644        | 0.6644           | nan                | nan           | 0.6644        | 0.0           | nan      | 0.6644   |
| 0.072         | 32.73 | 360  | 0.1416          | 0.3445   | 0.6889        | 0.6889           | nan                | nan           | 0.6889        | 0.0           | nan      | 0.6889   |
| 0.065         | 34.55 | 380  | 0.1348          | 0.3407   | 0.6814        | 0.6814           | nan                | nan           | 0.6814        | 0.0           | nan      | 0.6814   |
| 0.0696        | 36.36 | 400  | 0.1372          | 0.3285   | 0.6569        | 0.6569           | nan                | nan           | 0.6569        | 0.0           | nan      | 0.6569   |
| 0.0666        | 38.18 | 420  | 0.1430          | 0.3636   | 0.7272        | 0.7272           | nan                | nan           | 0.7272        | 0.0           | nan      | 0.7272   |
| 0.0601        | 40.0  | 440  | 0.1222          | 0.3211   | 0.6423        | 0.6423           | nan                | nan           | 0.6423        | 0.0           | nan      | 0.6423   |
| 0.0515        | 41.82 | 460  | 0.1225          | 0.3286   | 0.6572        | 0.6572           | nan                | nan           | 0.6572        | 0.0           | nan      | 0.6572   |
| 0.0558        | 43.64 | 480  | 0.1229          | 0.3375   | 0.6750        | 0.6750           | nan                | nan           | 0.6750        | 0.0           | nan      | 0.6750   |
| 0.07          | 45.45 | 500  | 0.1111          | 0.3057   | 0.6114        | 0.6114           | nan                | nan           | 0.6114        | 0.0           | nan      | 0.6114   |
| 0.0606        | 47.27 | 520  | 0.1251          | 0.3391   | 0.6782        | 0.6782           | nan                | nan           | 0.6782        | 0.0           | nan      | 0.6782   |
| 0.0561        | 49.09 | 540  | 0.1285          | 0.3499   | 0.6998        | 0.6998           | nan                | nan           | 0.6998        | 0.0           | nan      | 0.6998   |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3