---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-toolwear
  results: []
---
# segformer-b0-finetuned-segments-toolwear

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1501
- Mean Iou: 0.4560
- Mean Accuracy: 0.9040
- Overall Accuracy: 0.9643
- Accuracy Unlabeled: nan
- Accuracy Wear: 0.8404
- Accuracy Tool: 0.9675
- Iou Unlabeled: 0.0
- Iou Wear: 0.4034
- Iou Tool: 0.9646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
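
As an illustration only, these hyperparameters correspond roughly to a `TrainingArguments`/`Trainer` setup like the sketch below. The output directory, `num_labels=3` (unlabeled, wear, tool), and the dataset objects are assumptions, not part of the original training script; the reported Adam betas and epsilon are the Trainer's optimizer defaults.

```python
from transformers import (
    SegformerForSemanticSegmentation,
    TrainingArguments,
    Trainer,
)

# Placeholder datasets; the actual training/evaluation data is not documented.
# train_ds, eval_ds = ...

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=3,  # assumed label set: unlabeled, wear, tool
)

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=25,
    seed=42,
    lr_scheduler_type="linear",
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=train_ds,
    # eval_dataset=eval_ds,
)
# trainer.train()
```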
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Wear | Accuracy Tool | Iou Unlabeled | Iou Wear | Iou Tool |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:-------------:|:-------------:|:--------:|:--------:|
| 0.4464        | 1.82  | 20   | 0.6527          | 0.3325   | 0.5116        | 0.9740           | nan                | 0.0242        | 0.9990        | 0.0           | 0.0235   | 0.9740   |
| 0.3069        | 3.64  | 40   | 0.3300          | 0.4958   | 0.8505        | 0.9661           | nan                | 0.7288        | 0.9723        | 0.0           | 0.5213   | 0.9662   |
| 0.276         | 5.45  | 60   | 0.2597          | 0.4089   | 0.9324        | 0.9368           | nan                | 0.9278        | 0.9370        | 0.0           | 0.2909   | 0.9358   |
| 0.2648        | 7.27  | 80   | 0.2321          | 0.4338   | 0.8839        | 0.9567           | nan                | 0.8071        | 0.9607        | 0.0           | 0.3441   | 0.9572   |
| 0.245         | 9.09  | 100  | 0.2298          | 0.4021   | 0.9265        | 0.9359           | nan                | 0.9167        | 0.9364        | 0.0           | 0.2715   | 0.9348   |
| 0.2047        | 10.91 | 120  | 0.1897          | 0.4379   | 0.8814        | 0.9446           | nan                | 0.8147        | 0.9480        | 0.0           | 0.3684   | 0.9455   |
| 0.1695        | 12.73 | 140  | 0.1681          | 0.4561   | 0.8444        | 0.9636           | nan                | 0.7188        | 0.9701        | 0.0           | 0.4026   | 0.9657   |
| 0.1556        | 14.55 | 160  | 0.1741          | 0.4289   | 0.9060        | 0.9494           | nan                | 0.8603        | 0.9517        | 0.0           | 0.3372   | 0.9497   |
| 0.1435        | 16.36 | 180  | 0.1528          | 0.4746   | 0.8851        | 0.9679           | nan                | 0.7978        | 0.9723        | 0.0           | 0.4549   | 0.9689   |
| 0.1208        | 18.18 | 200  | 0.1648          | 0.4379   | 0.9126        | 0.9577           | nan                | 0.8650        | 0.9601        | 0.0           | 0.3560   | 0.9577   |
| 0.1425        | 20.0  | 220  | 0.1587          | 0.4451   | 0.9116        | 0.9576           | nan                | 0.8631        | 0.9601        | 0.0           | 0.3774   | 0.9578   |
| 0.1124        | 21.82 | 240  | 0.1515          | 0.4291   | 0.9044        | 0.9491           | nan                | 0.8574        | 0.9515        | 0.0           | 0.3380   | 0.9493   |
| 0.1509        | 23.64 | 260  | 0.1501          | 0.4560   | 0.9040        | 0.9643           | nan                | 0.8404        | 0.9675        | 0.0           | 0.4034   | 0.9646   |
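
The column names above match the output of the `mean_iou` metric from the `evaluate` library. As a hedged sketch of how such numbers are typically computed (the prediction and label arrays here are illustrative placeholders, not the actual evaluation data):

```python
import numpy as np
import evaluate

# Illustrative placeholder arrays; real evaluation uses the model's upsampled
# argmax predictions and the ground-truth segmentation masks.
predictions = [np.random.randint(0, 3, size=(512, 512))]
references = [np.random.randint(0, 3, size=(512, 512))]

metric = evaluate.load("mean_iou")
results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=3,        # assumed label set: unlabeled, wear, tool
    ignore_index=255,
    reduce_labels=False,
)

print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
print(results["per_category_iou"], results["per_category_accuracy"])
```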
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3