---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-toolwear
  results: []
---
# segformer-b0-finetuned-segments-toolwear

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the HorcruxNo13/toolwear_edges dataset. It achieves the following results on the evaluation set (see the inference sketch after this list):
- Loss: 0.7517
- Mean Iou: 0.3530
- Mean Accuracy: 0.7066
- Overall Accuracy: 0.7444
- Accuracy Unlabeled: nan
- Accuracy Tool: 0.6653
- Accuracy Wear: 0.7480
- Iou Unlabeled: 0.0
- Iou Tool: 0.3188
- Iou Wear: 0.7403
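
As a minimal usage sketch, inference follows the standard SegFormer pattern: preprocess the image, run a forward pass, upsample the logits to the input resolution, and take a per-pixel argmax. The repo id and image path below are assumptions, not values stated in this card.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "HorcruxNo13/segformer-b0-finetuned-segments-toolwear"  # assumed repo id
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("tool_image.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)

# SegFormer predicts at 1/4 resolution, so upsample before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # class id per pixel (unlabeled / tool / wear)
```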
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
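
The card does not include the training script; as a hedged sketch, these settings map onto the `transformers` Trainer API roughly as follows. The `num_labels=3` head size and the 20-step evaluation interval are assumptions based on the classes and table reported in this card.

```python
from transformers import SegformerForSemanticSegmentation, TrainingArguments

# Start from the nvidia/mit-b0 backbone; the segmentation decode head is freshly initialized.
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=3)

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=20,  # assumed from the 20-step spacing in the results table
    # The default optimizer already uses Adam-style betas=(0.9, 0.999) and epsilon=1e-8.
)

# These arguments would then be passed to transformers.Trainer together with the
# preprocessed train/eval splits of HorcruxNo13/toolwear_edges.
```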
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Tool | Accuracy Wear | Iou Unlabeled | Iou Tool | Iou Wear |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.8264 | 1.82 | 20 | 0.9929 | 0.3016 | 0.5119 | 0.6940 | nan | 0.3130 | 0.7109 | 0.0 | 0.2149 | 0.6899 |
| 0.566 | 3.64 | 40 | 0.8390 | 0.3172 | 0.6658 | 0.6561 | nan | 0.6765 | 0.6552 | 0.0 | 0.3052 | 0.6466 |
| 0.5515 | 5.45 | 60 | 0.7996 | 0.3015 | 0.7085 | 0.5831 | nan | 0.8455 | 0.5715 | 0.0 | 0.3365 | 0.5680 |
| 0.496 | 7.27 | 80 | 0.7495 | 0.3370 | 0.7783 | 0.6771 | nan | 0.8889 | 0.6676 | 0.0 | 0.3465 | 0.6645 |
| 0.4945 | 9.09 | 100 | 0.7214 | 0.3106 | 0.6966 | 0.6150 | nan | 0.7858 | 0.6074 | 0.0 | 0.3294 | 0.6025 |
| 0.4392 | 10.91 | 120 | 0.7105 | 0.3012 | 0.7519 | 0.5990 | nan | 0.9191 | 0.5848 | 0.0 | 0.3198 | 0.5839 |
| 0.3211 | 12.73 | 140 | 0.7570 | 0.3470 | 0.7008 | 0.7352 | nan | 0.6632 | 0.7384 | 0.0 | 0.3116 | 0.7292 |
| 0.2289 | 14.55 | 160 | 0.9477 | 0.3748 | 0.7214 | 0.7566 | nan | 0.6830 | 0.7598 | 0.0 | 0.3718 | 0.7527 |
| 0.4674 | 16.36 | 180 | 0.8172 | 0.3637 | 0.7442 | 0.7533 | nan | 0.7344 | 0.7541 | 0.0 | 0.3437 | 0.7476 |
| 0.3226 | 18.18 | 200 | 0.8199 | 0.3238 | 0.7286 | 0.6845 | nan | 0.7769 | 0.6804 | 0.0 | 0.2939 | 0.6777 |
| 0.1706 | 20.0 | 220 | 0.7336 | 0.3410 | 0.6894 | 0.7096 | nan | 0.6673 | 0.7115 | 0.0 | 0.3185 | 0.7044 |
| 0.2786 | 21.82 | 240 | 0.9254 | 0.3662 | 0.7577 | 0.7864 | nan | 0.7264 | 0.7891 | 0.0 | 0.3164 | 0.7821 |
| 0.1685 | 23.64 | 260 | 0.8291 | 0.3435 | 0.7685 | 0.7294 | nan | 0.8113 | 0.7258 | 0.0 | 0.3082 | 0.7224 |
| 0.1649 | 25.45 | 280 | 0.7200 | 0.3303 | 0.7133 | 0.6593 | nan | 0.7723 | 0.6543 | 0.0 | 0.3394 | 0.6516 |
| 0.1481 | 27.27 | 300 | 0.8155 | 0.3531 | 0.7558 | 0.7434 | nan | 0.7695 | 0.7422 | 0.0 | 0.3206 | 0.7385 |
| 0.1476 | 29.09 | 320 | 0.7374 | 0.3455 | 0.6734 | 0.7252 | nan | 0.6169 | 0.7300 | 0.0 | 0.3153 | 0.7211 |
| 0.2284 | 30.91 | 340 | 0.7254 | 0.3265 | 0.6989 | 0.6766 | nan | 0.7233 | 0.6745 | 0.0 | 0.3099 | 0.6695 |
| 0.1212 | 32.73 | 360 | 0.8022 | 0.3591 | 0.7252 | 0.7662 | nan | 0.6804 | 0.7700 | 0.0 | 0.3153 | 0.7620 |
| 0.1284 | 34.55 | 380 | 0.7345 | 0.3449 | 0.7044 | 0.7331 | nan | 0.6731 | 0.7357 | 0.0 | 0.3062 | 0.7284 |
| 0.1685 | 36.36 | 400 | 0.7581 | 0.3275 | 0.7357 | 0.6991 | nan | 0.7757 | 0.6957 | 0.0 | 0.2910 | 0.6915 |
| 0.1018 | 38.18 | 420 | 0.7303 | 0.3401 | 0.6575 | 0.7173 | nan | 0.5921 | 0.7228 | 0.0 | 0.3069 | 0.7133 |
| 0.1405 | 40.0 | 440 | 0.7375 | 0.3555 | 0.7301 | 0.7475 | nan | 0.7111 | 0.7491 | 0.0 | 0.3234 | 0.7431 |
| 0.08 | 41.82 | 460 | 0.7449 | 0.3561 | 0.7047 | 0.7457 | nan | 0.6598 | 0.7495 | 0.0 | 0.3265 | 0.7417 |
| 0.1311 | 43.64 | 480 | 0.7680 | 0.3552 | 0.7205 | 0.7444 | nan | 0.6945 | 0.7466 | 0.0 | 0.3257 | 0.7398 |
| 0.1235 | 45.45 | 500 | 0.7589 | 0.3523 | 0.7117 | 0.7398 | nan | 0.6811 | 0.7424 | 0.0 | 0.3218 | 0.7352 |
| 0.1169 | 47.27 | 520 | 0.7676 | 0.3535 | 0.6952 | 0.7529 | nan | 0.6320 | 0.7583 | 0.0 | 0.3110 | 0.7494 |
| 0.14 | 49.09 | 540 | 0.7517 | 0.3530 | 0.7066 | 0.7444 | nan | 0.6653 | 0.7480 | 0.0 | 0.3188 | 0.7403 |
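
Metrics of this kind (mean IoU, per-class accuracy) can be computed with the `evaluate` library's `mean_iou` metric; whether the numbers above were produced exactly this way is not stated in the card. The sketch below uses toy arrays and assumes label ids 0 = unlabeled, 1 = tool, 2 = wear, inferred from the class names above.

```python
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

# Toy prediction / reference maps standing in for real model output and ground truth.
pred = np.array([[1, 1, 2], [2, 2, 0]])
ref = np.array([[1, 2, 2], [2, 2, 0]])

results = mean_iou.compute(
    predictions=[pred],
    references=[ref],
    num_labels=3,
    ignore_index=255,    # assumed ignore value; the card does not state it
    reduce_labels=False,
)
print(results["mean_iou"], results["per_category_iou"])  # per-category order follows label ids
```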
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3