---
license: apache-2.0
base_model: projecte-aina/roberta-base-ca-v2-cased-te
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: 080524_epoch_5
results: []
pipeline_tag: zero-shot-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 080524_epoch_5
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co./projecte-aina/roberta-base-ca-v2-cased-te) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5972
- Accuracy: 0.8445
- Precision: 0.8448
- Recall: 0.8445
- F1: 0.8445
- Ratio: 0.4874
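Since the card declares the `zero-shot-classification` pipeline tag, the model can be queried through the standard 🤗 Transformers pipeline. The snippet below is only a sketch: the model path, the example sentence, the candidate labels (taken from the per-class report further down) and the hypothesis template are illustrative assumptions, not values confirmed by this card.

```python
from transformers import pipeline

# Placeholder path: point this at the local checkpoint or the Hub id of 080524_epoch_5.
classifier = pipeline(
    "zero-shot-classification",
    model="./080524_epoch_5",
)

result = classifier(
    "El fanal del carrer Major porta tres dies apagat.",  # illustrative citizen report in Catalan
    candidate_labels=[
        "Enllumenat públic",
        "Neteja de la via pública",
        "Via pública i mobilitat",
    ],
    hypothesis_template="Aquest text tracta sobre {}.",  # assumed template, adjust to the one used in training
)
print(result["labels"][0], result["scores"][0])
```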
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 2
- seed: 47
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 1
- label_smoothing_factor: 0.1
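For reference, a `TrainingArguments` configuration mirroring the list above might look as follows. This is a sketch only, assuming the 🤗 `Trainer` was used (as the `generated_from_trainer` tag suggests); model and dataset loading are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="080524_epoch_5",
    learning_rate=2e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=2,
    seed=47,
    gradient_accumulation_steps=2,   # effective train batch size: 10 * 2 = 20
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    warmup_steps=4,
    num_train_epochs=1,
    label_smoothing_factor=0.1,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
)
```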
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 0.4518 | 0.1626 | 10 | 0.6633 | 0.8361 | 0.8469 | 0.8361 | 0.8348 | 0.4118 |
| 0.4418 | 0.3252 | 20 | 0.6798 | 0.8277 | 0.8279 | 0.8277 | 0.8277 | 0.5126 |
| 0.5709 | 0.4878 | 30 | 0.7447 | 0.8193 | 0.8367 | 0.8193 | 0.8170 | 0.3866 |
| 0.6645 | 0.6504 | 40 | 0.6229 | 0.8487 | 0.8487 | 0.8487 | 0.8487 | 0.5 |
| 0.6606 | 0.8130 | 50 | 0.6014 | 0.8445 | 0.8446 | 0.8445 | 0.8445 | 0.5042 |
| 0.5763 | 0.9756 | 60 | 0.5972 | 0.8445 | 0.8448 | 0.8445 | 0.8445 | 0.4874 |
Per-class evaluation report:

| Class | precision | recall | f1-score | top1-score | top2-score | top3-score | good1-score | good2-score | support |
|:--|--:|--:|--:|--:|--:|--:|--:|--:|--:|
| Aigua | 0.632 | 0.545 | 0.585 | 0.545 | 0.818 | 0.955 | 0.955 | 0.955 | 22 |
| Consum, comerç i mercats | 0.103 | 0.571 | 0.174 | 0.571 | 0.714 | 0.857 | 0.714 | 0.714 | 7 |
| Cultura | 0.500 | 0.750 | 0.600 | 0.750 | 0.750 | 0.750 | 0.750 | 0.750 | 8 |
| Economia | 0.211 | 0.500 | 0.296 | 0.500 | 0.875 | 1.000 | 0.875 | 0.875 | 8 |
| Educació | 0.438 | 0.636 | 0.519 | 0.636 | 0.818 | 1.000 | 1.000 | 1.000 | 11 |
| Enllumenat públic | 0.833 | 0.851 | 0.842 | 0.851 | 0.936 | 0.979 | 0.979 | 0.979 | 47 |
| Esports | 0.562 | 0.750 | 0.643 | 0.750 | 0.917 | 1.000 | 1.000 | 1.000 | 12 |
| Habitatge | 0.208 | 0.385 | 0.270 | 0.385 | 0.615 | 0.923 | 0.692 | 0.846 | 13 |
| Horta | 0.000 | 0.000 | 0.000 | 0.000 | 0.444 | 0.556 | 0.556 | 0.556 | 9 |
| Medi ambient i jardins | 0.429 | 0.559 | 0.485 | 0.559 | 0.729 | 0.915 | 0.915 | 0.915 | 59 |
| Neteja de la via pública | 0.686 | 0.238 | 0.353 | 0.238 | 0.505 | 0.772 | 0.762 | 0.762 | 101 |
| Salut pública | 0.135 | 0.292 | 0.184 | 0.292 | 0.708 | 0.792 | 0.708 | 0.708 | 24 |
| Seguretat ciutadana i incivisme | 0.727 | 0.471 | 0.571 | 0.471 | 0.588 | 0.765 | 0.706 | 0.706 | 34 |
| Serveis socials | 0.333 | 0.667 | 0.444 | 0.667 | 0.889 | 0.889 | 0.889 | 0.889 | 9 |
| Tràmits | 0.395 | 0.395 | 0.395 | 0.395 | 0.884 | 0.907 | 0.907 | 0.907 | 43 |
| Urbanisme | 0.379 | 0.172 | 0.237 | 0.172 | 0.453 | 0.641 | 0.578 | 0.578 | 64 |
| Via pública i mobilitat | 0.778 | 0.778 | 0.778 | 0.778 | 0.846 | 0.889 | 0.864 | 0.867 | 279 |
| macro avg | 0.432 | 0.504 | 0.434 | 0.504 | 0.735 | 0.858 | 0.815 | 0.824 | 750 |
| weighted avg | 0.610 | 0.557 | 0.559 | 0.557 | 0.739 | 0.853 | 0.825 | 0.829 | 750 |

- Accuracy: 0.557
- Error rate: 0.443
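The precision/recall/F1 part of the report above matches the layout of scikit-learn's `classification_report`; the top-k and good-k columns come from a custom ranking evaluation that is not documented in this card. A minimal sketch of the standard part, using placeholder labels instead of the real 750 evaluation examples:

```python
from sklearn.metrics import classification_report

# Placeholder gold and predicted topic labels; in practice these would be the
# 750 evaluation examples scored in the report above.
y_true = ["Aigua", "Cultura", "Esports", "Esports"]
y_pred = ["Aigua", "Esports", "Esports", "Esports"]

print(classification_report(y_true, y_pred, digits=3, zero_division=0))
```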
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1