|
---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-toolwear
  results: []
---
|
|
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
|
|
|
# segformer-b0-finetuned-segments-toolwear |
|
|
|
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co./nvidia/mit-b0) on an otherwise undocumented tool-wear segmentation dataset with two classes, `unlabeled` and `tool`.
It achieves the following results on the evaluation set:
- Loss: 0.0341
- Mean IoU: 0.4939
- Mean Accuracy: 0.9878
- Overall Accuracy: 0.9878
- Accuracy Unlabeled: nan
- Accuracy Tool: 0.9878
- IoU Unlabeled: 0.0
- IoU Tool: 0.9878

Note that the mean IoU is the unweighted average of the per-class IoUs, so the 0.0 recorded for the `unlabeled` class pulls it down to roughly half of the `tool` IoU. A minimal inference sketch is given below.
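
The following is a minimal, hedged inference sketch using the `transformers` semantic-segmentation classes. It assumes the checkpoint can be loaded from a repository or local directory named `segformer-b0-finetuned-segments-toolwear`, and the image path is a placeholder; the exact preprocessing and label mapping used during training are not documented in this card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Hypothetical checkpoint location -- substitute the actual repo id or local path.
checkpoint = "segformer-b0-finetuned-segments-toolwear"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("tool_image.png").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)

# Upsample the low-resolution logits to the input size and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
pred_mask = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class indices
```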
|
|
|
## Model description |
|
|
|
More information needed |
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
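
As a rough sketch only (the original training script is not included in this card), the hyperparameters above map onto the 🤗 `TrainingArguments` API as shown below; `output_dir`, the model, the dataset variables, and `compute_metrics` in the commented `Trainer` call are placeholders.

```python
from transformers import TrainingArguments, Trainer

# Hedged reconstruction of the hyperparameters listed above; output_dir and the
# commented-out objects are placeholders, not taken from the original run.
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",  # linear decay, matching the scheduler listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(
#     model=model,                      # a SegformerForSemanticSegmentation instance
#     args=training_args,
#     train_dataset=train_ds,           # placeholder dataset objects
#     eval_dataset=eval_ds,
#     compute_metrics=compute_metrics,  # e.g. mean IoU via the `evaluate` library
# )
# trainer.train()
```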
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Mean IoU | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Tool | IoU Unlabeled | IoU Tool |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:-------------:|:--------:|
| 0.2127 | 1.82 | 20 | 0.3537 | 0.4996 | 0.9991 | 0.9991 | nan | 0.9991 | 0.0 | 0.9991 |
| 0.2095 | 3.64 | 40 | 0.1407 | 0.4987 | 0.9974 | 0.9974 | nan | 0.9974 | 0.0 | 0.9974 |
| 0.1253 | 5.45 | 60 | 0.1011 | 0.4970 | 0.9940 | 0.9940 | nan | 0.9940 | 0.0 | 0.9940 |
| 0.0812 | 7.27 | 80 | 0.0821 | 0.4957 | 0.9914 | 0.9914 | nan | 0.9914 | 0.0 | 0.9914 |
| 0.0841 | 9.09 | 100 | 0.0652 | 0.4926 | 0.9851 | 0.9851 | nan | 0.9851 | 0.0 | 0.9851 |
| 0.0574 | 10.91 | 120 | 0.0612 | 0.4930 | 0.9861 | 0.9861 | nan | 0.9861 | 0.0 | 0.9861 |
| 0.047 | 12.73 | 140 | 0.0562 | 0.4940 | 0.9880 | 0.9880 | nan | 0.9880 | 0.0 | 0.9880 |
| 0.0542 | 14.55 | 160 | 0.0488 | 0.4937 | 0.9874 | 0.9874 | nan | 0.9874 | 0.0 | 0.9874 |
| 0.0405 | 16.36 | 180 | 0.0487 | 0.4958 | 0.9916 | 0.9916 | nan | 0.9916 | 0.0 | 0.9916 |
| 0.045 | 18.18 | 200 | 0.0484 | 0.4964 | 0.9929 | 0.9929 | nan | 0.9929 | 0.0 | 0.9929 |
| 0.0487 | 20.0 | 220 | 0.0412 | 0.4936 | 0.9873 | 0.9873 | nan | 0.9873 | 0.0 | 0.9873 |
| 0.0417 | 21.82 | 240 | 0.0397 | 0.4936 | 0.9872 | 0.9872 | nan | 0.9872 | 0.0 | 0.9872 |
| 0.0525 | 23.64 | 260 | 0.0393 | 0.4934 | 0.9868 | 0.9868 | nan | 0.9868 | 0.0 | 0.9868 |
| 0.0425 | 25.45 | 280 | 0.0381 | 0.4930 | 0.9861 | 0.9861 | nan | 0.9861 | 0.0 | 0.9861 |
| 0.0386 | 27.27 | 300 | 0.0393 | 0.4927 | 0.9855 | 0.9855 | nan | 0.9855 | 0.0 | 0.9855 |
| 0.0239 | 29.09 | 320 | 0.0387 | 0.4933 | 0.9866 | 0.9866 | nan | 0.9866 | 0.0 | 0.9866 |
| 0.0279 | 30.91 | 340 | 0.0369 | 0.4941 | 0.9882 | 0.9882 | nan | 0.9882 | 0.0 | 0.9882 |
| 0.0194 | 32.73 | 360 | 0.0368 | 0.4916 | 0.9832 | 0.9832 | nan | 0.9832 | 0.0 | 0.9832 |
| 0.0238 | 34.55 | 380 | 0.0370 | 0.4937 | 0.9874 | 0.9874 | nan | 0.9874 | 0.0 | 0.9874 |
| 0.0281 | 36.36 | 400 | 0.0347 | 0.4930 | 0.9859 | 0.9859 | nan | 0.9859 | 0.0 | 0.9859 |
| 0.0218 | 38.18 | 420 | 0.0351 | 0.4924 | 0.9848 | 0.9848 | nan | 0.9848 | 0.0 | 0.9848 |
| 0.0197 | 40.0 | 440 | 0.0354 | 0.4932 | 0.9864 | 0.9864 | nan | 0.9864 | 0.0 | 0.9864 |
| 0.0197 | 41.82 | 460 | 0.0343 | 0.4933 | 0.9865 | 0.9865 | nan | 0.9865 | 0.0 | 0.9865 |
| 0.0231 | 43.64 | 480 | 0.0345 | 0.4931 | 0.9862 | 0.9862 | nan | 0.9862 | 0.0 | 0.9862 |
| 0.0223 | 45.45 | 500 | 0.0346 | 0.4938 | 0.9875 | 0.9875 | nan | 0.9875 | 0.0 | 0.9875 |
| 0.0184 | 47.27 | 520 | 0.0340 | 0.4927 | 0.9854 | 0.9854 | nan | 0.9854 | 0.0 | 0.9854 |
| 0.0202 | 49.09 | 540 | 0.0341 | 0.4939 | 0.9878 | 0.9878 | nan | 0.9878 | 0.0 | 0.9878 |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.28.0
- PyTorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
|