---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-toolwear
results: []
---
# segformer-b0-finetuned-segments-toolwear
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co./nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0799
- Mean Iou: 0.4629
- Mean Accuracy: 0.9258
- Overall Accuracy: 0.9258
- Accuracy Unlabeled: nan
- Accuracy Liver: 0.9258
- Iou Unlabeled: 0.0
- Iou Liver: 0.9258
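
Note that Accuracy Unlabeled is nan (which suggests the validation masks contain no pixels of that class) while Iou Unlabeled is 0.0, so the unweighted class average is pulled down to roughly half of Iou Liver. A minimal sketch of that arithmetic, using the values reported above:

```python
import numpy as np

# Per-class IoUs as reported above: unlabeled, liver.
per_class_iou = np.array([0.0, 0.9258])

# Mean Iou is the unweighted average over classes, so the empty
# unlabeled class halves the headline number.
print(per_class_iou.mean())  # 0.4629
```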
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
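
As a rough illustration (not the original training script, which is not included in this card), these hyperparameters map onto the `Trainer` API as sketched below; the dataset is a synthetic stand-in and `id2label` is inferred from the metric names above:

```python
import torch
from torch.utils.data import Dataset
from transformers import (
    SegformerForSemanticSegmentation,
    Trainer,
    TrainingArguments,
)

# Synthetic stand-in: the card does not document the training data.
class ToySegmentationDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {
            "pixel_values": torch.randn(3, 512, 512),
            "labels": torch.randint(0, 2, (512, 512)),
        }

# Label names inferred from the metrics above (unlabeled / liver).
id2label = {0: "unlabeled", 1: "liver"}

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=len(id2label),
    id2label=id2label,
    label2id={name: i for i, name in id2label.items()},
)

args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",  # linear decay, as listed above
    num_train_epochs=35,
)

# Trainer's default optimizer is AdamW with betas=(0.9, 0.999) and
# epsilon=1e-08, matching the optimizer listed above.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ToySegmentationDataset(),
    eval_dataset=ToySegmentationDataset(),
)
trainer.train()
```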
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Liver | Iou Unlabeled | Iou Liver |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:-------------:|:---------:|
| 0.2837 | 0.8 | 20 | 0.3699 | 0.3876 | 0.7752 | 0.7752 | nan | 0.7752 | 0.0 | 0.7752 |
| 0.2264 | 1.6 | 40 | 0.1982 | 0.4222 | 0.8444 | 0.8444 | nan | 0.8444 | 0.0 | 0.8444 |
| 0.1687 | 2.4 | 60 | 0.1594 | 0.3988 | 0.7977 | 0.7977 | nan | 0.7977 | 0.0 | 0.7977 |
| 0.1489 | 3.2 | 80 | 0.1396 | 0.4050 | 0.8100 | 0.8100 | nan | 0.8100 | 0.0 | 0.8100 |
| 0.1111 | 4.0 | 100 | 0.1203 | 0.4223 | 0.8446 | 0.8446 | nan | 0.8446 | 0.0 | 0.8446 |
| 0.1115 | 4.8 | 120 | 0.1160 | 0.4512 | 0.9023 | 0.9023 | nan | 0.9023 | 0.0 | 0.9023 |
| 0.1081 | 5.6 | 140 | 0.1053 | 0.4504 | 0.9009 | 0.9009 | nan | 0.9009 | 0.0 | 0.9009 |
| 0.1111 | 6.4 | 160 | 0.0960 | 0.4526 | 0.9051 | 0.9051 | nan | 0.9051 | 0.0 | 0.9051 |
| 0.0904 | 7.2 | 180 | 0.0954 | 0.4646 | 0.9292 | 0.9292 | nan | 0.9292 | 0.0 | 0.9292 |
| 0.0868 | 8.0 | 200 | 0.0925 | 0.4593 | 0.9187 | 0.9187 | nan | 0.9187 | 0.0 | 0.9187 |
| 0.092 | 8.8 | 220 | 0.0852 | 0.4630 | 0.9261 | 0.9261 | nan | 0.9261 | 0.0 | 0.9261 |
| 0.0686 | 9.6 | 240 | 0.0897 | 0.4631 | 0.9263 | 0.9263 | nan | 0.9263 | 0.0 | 0.9263 |
| 0.0684 | 10.4 | 260 | 0.0939 | 0.4727 | 0.9455 | 0.9455 | nan | 0.9455 | 0.0 | 0.9455 |
| 0.0634 | 11.2 | 280 | 0.0919 | 0.4241 | 0.8483 | 0.8483 | nan | 0.8483 | 0.0 | 0.8483 |
| 0.059 | 12.0 | 300 | 0.0886 | 0.4727 | 0.9455 | 0.9455 | nan | 0.9455 | 0.0 | 0.9455 |
| 0.052 | 12.8 | 320 | 0.0764 | 0.4554 | 0.9108 | 0.9108 | nan | 0.9108 | 0.0 | 0.9108 |
| 0.0558 | 13.6 | 340 | 0.0769 | 0.4629 | 0.9258 | 0.9258 | nan | 0.9258 | 0.0 | 0.9258 |
| 0.0594 | 14.4 | 360 | 0.0770 | 0.4616 | 0.9231 | 0.9231 | nan | 0.9231 | 0.0 | 0.9231 |
| 0.0641 | 15.2 | 380 | 0.0844 | 0.4709 | 0.9417 | 0.9417 | nan | 0.9417 | 0.0 | 0.9417 |
| 0.0645 | 16.0 | 400 | 0.0790 | 0.4632 | 0.9263 | 0.9263 | nan | 0.9263 | 0.0 | 0.9263 |
| 0.0545 | 16.8 | 420 | 0.0776 | 0.4610 | 0.9220 | 0.9220 | nan | 0.9220 | 0.0 | 0.9220 |
| 0.056 | 17.6 | 440 | 0.0780 | 0.4541 | 0.9082 | 0.9082 | nan | 0.9082 | 0.0 | 0.9082 |
| 0.0472 | 18.4 | 460 | 0.0742 | 0.4595 | 0.9189 | 0.9189 | nan | 0.9189 | 0.0 | 0.9189 |
| 0.0478 | 19.2 | 480 | 0.0806 | 0.4690 | 0.9380 | 0.9380 | nan | 0.9380 | 0.0 | 0.9380 |
| 0.0523 | 20.0 | 500 | 0.0741 | 0.4550 | 0.9100 | 0.9100 | nan | 0.9100 | 0.0 | 0.9100 |
| 0.0401 | 20.8 | 520 | 0.0794 | 0.4637 | 0.9274 | 0.9274 | nan | 0.9274 | 0.0 | 0.9274 |
| 0.041 | 21.6 | 540 | 0.0772 | 0.4631 | 0.9262 | 0.9262 | nan | 0.9262 | 0.0 | 0.9262 |
| 0.0386 | 22.4 | 560 | 0.0795 | 0.4620 | 0.9240 | 0.9240 | nan | 0.9240 | 0.0 | 0.9240 |
| 0.0386 | 23.2 | 580 | 0.0761 | 0.4616 | 0.9232 | 0.9232 | nan | 0.9232 | 0.0 | 0.9232 |
| 0.0628 | 24.0 | 600 | 0.0778 | 0.4636 | 0.9271 | 0.9271 | nan | 0.9271 | 0.0 | 0.9271 |
| 0.0387 | 24.8 | 620 | 0.0782 | 0.4613 | 0.9226 | 0.9226 | nan | 0.9226 | 0.0 | 0.9226 |
| 0.0422 | 25.6 | 640 | 0.0778 | 0.4616 | 0.9233 | 0.9233 | nan | 0.9233 | 0.0 | 0.9233 |
| 0.0392 | 26.4 | 660 | 0.0838 | 0.4696 | 0.9393 | 0.9393 | nan | 0.9393 | 0.0 | 0.9393 |
| 0.04 | 27.2 | 680 | 0.0809 | 0.4658 | 0.9315 | 0.9315 | nan | 0.9315 | 0.0 | 0.9315 |
| 0.0341 | 28.0 | 700 | 0.0822 | 0.4667 | 0.9335 | 0.9335 | nan | 0.9335 | 0.0 | 0.9335 |
| 0.0329 | 28.8 | 720 | 0.0797 | 0.4639 | 0.9278 | 0.9278 | nan | 0.9278 | 0.0 | 0.9278 |
| 0.0373 | 29.6 | 740 | 0.0769 | 0.4582 | 0.9163 | 0.9163 | nan | 0.9163 | 0.0 | 0.9163 |
| 0.0366 | 30.4 | 760 | 0.0804 | 0.4632 | 0.9264 | 0.9264 | nan | 0.9264 | 0.0 | 0.9264 |
| 0.0432 | 31.2 | 780 | 0.0793 | 0.4587 | 0.9174 | 0.9174 | nan | 0.9174 | 0.0 | 0.9174 |
| 0.0328 | 32.0 | 800 | 0.0838 | 0.4688 | 0.9377 | 0.9377 | nan | 0.9377 | 0.0 | 0.9377 |
| 0.0377 | 32.8 | 820 | 0.0805 | 0.4643 | 0.9286 | 0.9286 | nan | 0.9286 | 0.0 | 0.9286 |
| 0.0327 | 33.6 | 840 | 0.0784 | 0.4614 | 0.9228 | 0.9228 | nan | 0.9228 | 0.0 | 0.9228 |
| 0.032 | 34.4 | 860 | 0.0799 | 0.4629 | 0.9258 | 0.9258 | nan | 0.9258 | 0.0 | 0.9258 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
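
For completeness, a minimal inference sketch compatible with the versions above (the checkpoint id is assumed from this card's title and the input path is a placeholder):

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

checkpoint = "segformer-b0-finetuned-segments-toolwear"  # assumed repo id
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution; upsample before the argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```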