---
license: other
tags:
  - generated_from_trainer
model-index:
  - name: segformer-b0-finetuned-segments-toolwear
    results: []
---

# segformer-b0-finetuned-segments-toolwear

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0517
- Mean Iou: 0.3741
- Mean Accuracy: 0.7482
- Overall Accuracy: 0.7482
- Accuracy Unlabeled: nan
- Accuracy Tool: nan
- Accuracy Wear: 0.7482
- Iou Unlabeled: 0.0
- Iou Tool: nan
- Iou Wear: 0.7482
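
The snippet below is a minimal inference sketch for a checkpoint like this one using 🤗 Transformers. The repository id `HorcruxNo13/segformer-b0-finetuned-segments-toolwear` and the input file name are assumptions inferred from this card, not values it confirms.

```python
# Minimal inference sketch. The repo id is an assumption inferred from the
# model name above; swap in the actual checkpoint path if it differs.
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo_id = "HorcruxNo13/segformer-b0-finetuned-segments-toolwear"  # assumed repo id
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")  # preprocessing from the base checkpoint
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

image = Image.open("tool_image.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax as the mask.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of predicted class ids
```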

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
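
As a rough guide, these values map onto `transformers.TrainingArguments` as sketched below. The output directory and every setting not listed above are assumptions; the training and evaluation datasets are not described in this card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir and all unlisted
# settings are assumptions, not values taken from this card.
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer
    # defaults (adam_beta1, adam_beta2, adam_epsilon), so no override is needed.
)

# These arguments would then be passed to transformers.Trainer together with
# the (unspecified) datasets and a compute_metrics function producing the
# IoU/accuracy metrics reported below.
```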

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Tool | Accuracy Wear | Iou Unlabeled | Iou Tool | Iou Wear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:-------------:|:-------------:|:--------:|:--------:|
| 0.8497        | 1.82  | 20   | 0.8647          | 0.4917   | 0.9834        | 0.9834           | nan                | nan           | 0.9834        | 0.0           | nan      | 0.9834   |
| 0.6095        | 3.64  | 40   | 0.5158          | 0.4642   | 0.9283        | 0.9283           | nan                | nan           | 0.9283        | 0.0           | nan      | 0.9283   |
| 0.4377        | 5.45  | 60   | 0.4200          | 0.4646   | 0.9291        | 0.9291           | nan                | nan           | 0.9291        | 0.0           | nan      | 0.9291   |
| 0.3756        | 7.27  | 80   | 0.3535          | 0.4780   | 0.9560        | 0.9560           | nan                | nan           | 0.9560        | 0.0           | nan      | 0.9560   |
| 0.4256        | 9.09  | 100  | 0.2951          | 0.4873   | 0.9746        | 0.9746           | nan                | nan           | 0.9746        | 0.0           | nan      | 0.9746   |
| 0.2748        | 10.91 | 120  | 0.2500          | 0.4817   | 0.9634        | 0.9634           | nan                | nan           | 0.9634        | 0.0           | nan      | 0.9634   |
| 0.2347        | 12.73 | 140  | 0.2000          | 0.4065   | 0.8129        | 0.8129           | nan                | nan           | 0.8129        | 0.0           | nan      | 0.8129   |
| 0.1777        | 14.55 | 160  | 0.1651          | 0.4340   | 0.8680        | 0.8680           | nan                | nan           | 0.8680        | 0.0           | nan      | 0.8680   |
| 0.186         | 16.36 | 180  | 0.1530          | 0.4211   | 0.8422        | 0.8422           | nan                | nan           | 0.8422        | 0.0           | nan      | 0.8422   |
| 0.1652        | 18.18 | 200  | 0.1143          | 0.4304   | 0.8608        | 0.8608           | nan                | nan           | 0.8608        | 0.0           | nan      | 0.8608   |
| 0.1227        | 20.0  | 220  | 0.1436          | 0.4838   | 0.9676        | 0.9676           | nan                | nan           | 0.9676        | 0.0           | nan      | 0.9676   |
| 0.1111        | 21.82 | 240  | 0.1014          | 0.3994   | 0.7988        | 0.7988           | nan                | nan           | 0.7988        | 0.0           | nan      | 0.7988   |
| 0.0989        | 23.64 | 260  | 0.0914          | 0.3574   | 0.7147        | 0.7147           | nan                | nan           | 0.7147        | 0.0           | nan      | 0.7147   |
| 0.1051        | 25.45 | 280  | 0.0871          | 0.2844   | 0.5689        | 0.5689           | nan                | nan           | 0.5689        | 0.0           | nan      | 0.5689   |
| 0.0975        | 27.27 | 300  | 0.0679          | 0.3893   | 0.7786        | 0.7786           | nan                | nan           | 0.7786        | 0.0           | nan      | 0.7786   |
| 0.0928        | 29.09 | 320  | 0.0723          | 0.4241   | 0.8483        | 0.8483           | nan                | nan           | 0.8483        | 0.0           | nan      | 0.8483   |
| 0.0673        | 30.91 | 340  | 0.0653          | 0.3628   | 0.7255        | 0.7255           | nan                | nan           | 0.7255        | 0.0           | nan      | 0.7255   |
| 0.0652        | 32.73 | 360  | 0.0641          | 0.4023   | 0.8047        | 0.8047           | nan                | nan           | 0.8047        | 0.0           | nan      | 0.8047   |
| 0.0912        | 34.55 | 380  | 0.0734          | 0.4453   | 0.8906        | 0.8906           | nan                | nan           | 0.8906        | 0.0           | nan      | 0.8906   |
| 0.0682        | 36.36 | 400  | 0.0609          | 0.3322   | 0.6644        | 0.6644           | nan                | nan           | 0.6644        | 0.0           | nan      | 0.6644   |
| 0.0737        | 38.18 | 420  | 0.0619          | 0.4053   | 0.8107        | 0.8107           | nan                | nan           | 0.8107        | 0.0           | nan      | 0.8107   |
| 0.06          | 40.0  | 440  | 0.0564          | 0.3593   | 0.7186        | 0.7186           | nan                | nan           | 0.7186        | 0.0           | nan      | 0.7186   |
| 0.0555        | 41.82 | 460  | 0.0562          | 0.4025   | 0.8050        | 0.8050           | nan                | nan           | 0.8050        | 0.0           | nan      | 0.8050   |
| 0.063         | 43.64 | 480  | 0.0550          | 0.3945   | 0.7891        | 0.7891           | nan                | nan           | 0.7891        | 0.0           | nan      | 0.7891   |
| 0.0641        | 45.45 | 500  | 0.0554          | 0.4032   | 0.8065        | 0.8065           | nan                | nan           | 0.8065        | 0.0           | nan      | 0.8065   |
| 0.0739        | 47.27 | 520  | 0.0549          | 0.3880   | 0.7760        | 0.7760           | nan                | nan           | 0.7760        | 0.0           | nan      | 0.7760   |
| 0.0684        | 49.09 | 540  | 0.0517          | 0.3741   | 0.7482        | 0.7482           | nan                | nan           | 0.7482        | 0.0           | nan      | 0.7482   |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3