---
license: mit
base_model: BAAI/bge-base-en-v1.5
tags:
- generated_from_trainer
model-index:
- name: IKI-Category-multilabel_bge
  results: []
co2_eq_emissions:
  emissions: 47.3697569372214
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: Intel(R) Xeon(R) CPU @ 2.30GHz
  ram_total_size: 12.674781799316406
  hours_used: 0.996
  hardware_used: 1 x Tesla T4
---
# IKI-Category-multilabel_bge
This model is a fine-tuned version of [BAAI/bge-base-en-v1.5](https://huggingface.co./BAAI/bge-base-en-v1.5) on a dataset that is not documented here.
It achieves the following results on the evaluation set:
- Loss: 0.4541
- Precision-micro: 0.75
- Precision-samples: 0.7708
- Precision-weighted: 0.7517
- Recall-micro: 0.7880
- Recall-samples: 0.7858
- Recall-weighted: 0.7880
- F1-micro: 0.7685
- F1-samples: 0.7537
- F1-weighted: 0.7615
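The micro, samples, and weighted variants reported above correspond to scikit-learn's multi-label averaging modes (this is the usual convention for Trainer-generated cards, though the exact metric code is not included here). A minimal sketch on toy data:

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-label ground truth and predictions: 3 samples, 4 labels.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 0, 0, 1]])

micro    = f1_score(y_true, y_pred, average="micro")     # pool all label decisions
samples  = f1_score(y_true, y_pred, average="samples")   # F1 per document, then averaged
weighted = f1_score(y_true, y_pred, average="weighted")  # F1 per label, weighted by support
```

On this toy input `micro` is 0.8 (4 true positives out of 4 predicted and 6 actual positives), while `samples` and `weighted` differ because they average over documents and labels respectively.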
## Model description
[BAAI/bge-base-en-v1.5](https://huggingface.co./BAAI/bge-base-en-v1.5) with a multi-label classification head, fine-tuned to assign one or more of the 16 transport-related categories listed in the per-category table below.
## Intended uses & limitations
Intended for multi-label categorization of short English texts into the transport categories below. Performance varies considerably by class: low-support categories such as "Other Transport Category" (F1 0.375) and "Transport demand management" (F1 0.500) are markedly weaker than well-represented ones such as "Electric mobility" (F1 0.837).
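For multi-label inference, each class logit is scored independently: a sigmoid followed by a threshold (0.5 is a common default; the threshold used for the results above is not stated in this card). A minimal sketch of the decision rule, with hypothetical label names standing in for the model's `id2label` mapping:

```python
import numpy as np

# Hypothetical subset of labels; the real mapping lives in the checkpoint's config.
LABELS = ["Active mobility", "Alternative fuels", "Electric mobility"]

def predict_labels(logits, threshold=0.5):
    """Sigmoid each logit independently and keep every label scoring >= threshold."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

# In practice the logits come from the fine-tuned encoder, e.g. (not run here):
#   from transformers import AutoTokenizer, AutoModelForSequenceClassification
#   tok = AutoTokenizer.from_pretrained(checkpoint)
#   model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
#   logits = model(**tok(text, return_tensors="pt")).logits[0].detach().numpy()
predict_labels([2.1, -0.3, 0.8])  # -> ["Active mobility", "Electric mobility"]
```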
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 15
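The list above maps onto `transformers.TrainingArguments` roughly as follows (a sketch, not a saved config; argument names are per that API). Note that `total_train_batch_size: 16` is derived, not set directly:

```python
# Hedged reconstruction of the training setup from the hyperparameters listed above.
hparams = dict(
    learning_rate=4.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=15,
)

# Effective (total) train batch size = per-device batch * accumulation steps.
effective_batch = (hparams["per_device_train_batch_size"]
                   * hparams["gradient_accumulation_steps"])  # -> 16
```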
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:-----------------:|:------------------:|:------------:|:--------------:|:---------------:|:--------:|:----------:|:-----------:|
| 0.8999 | 0.99 | 94 | 0.8742 | 0.3889 | 0.0272 | 0.1308 | 0.0169 | 0.0188 | 0.0169 | 0.0323 | 0.0202 | 0.0280 |
| 0.7377 | 2.0 | 189 | 0.6770 | 0.4727 | 0.4996 | 0.5333 | 0.5639 | 0.5782 | 0.5639 | 0.5143 | 0.4883 | 0.4998 |
| 0.5582 | 2.99 | 283 | 0.5552 | 0.5111 | 0.5585 | 0.5685 | 0.7229 | 0.7357 | 0.7229 | 0.5988 | 0.5959 | 0.6175 |
| 0.3943 | 4.0 | 378 | 0.4713 | 0.5616 | 0.6397 | 0.5869 | 0.7904 | 0.8071 | 0.7904 | 0.6567 | 0.6761 | 0.6611 |
| 0.2883 | 4.99 | 472 | 0.4555 | 0.6384 | 0.6969 | 0.6444 | 0.7446 | 0.7641 | 0.7446 | 0.6874 | 0.6901 | 0.6854 |
| 0.2112 | 6.0 | 567 | 0.4459 | 0.6443 | 0.6968 | 0.6637 | 0.7855 | 0.7942 | 0.7855 | 0.7079 | 0.7123 | 0.7068 |
| 0.1608 | 6.99 | 661 | 0.4212 | 0.6508 | 0.7071 | 0.6586 | 0.7904 | 0.7931 | 0.7904 | 0.7138 | 0.7161 | 0.7116 |
| 0.1247 | 8.0 | 756 | 0.4177 | 0.6633 | 0.7145 | 0.6650 | 0.7976 | 0.8006 | 0.7976 | 0.7243 | 0.7193 | 0.7195 |
| 0.1031 | 8.99 | 850 | 0.4435 | 0.7277 | 0.7523 | 0.7306 | 0.7855 | 0.7875 | 0.7855 | 0.7555 | 0.7425 | 0.7487 |
| 0.0851 | 10.0 | 945 | 0.4522 | 0.7380 | 0.7623 | 0.7465 | 0.7807 | 0.7795 | 0.7807 | 0.7588 | 0.7432 | 0.7516 |
| 0.074 | 10.99 | 1039 | 0.4548 | 0.7359 | 0.7663 | 0.7368 | 0.7855 | 0.7910 | 0.7855 | 0.7599 | 0.7490 | 0.7521 |
| 0.0648 | 12.0 | 1134 | 0.4430 | 0.7425 | 0.7676 | 0.7437 | 0.7783 | 0.7781 | 0.7783 | 0.76 | 0.7461 | 0.7540 |
| 0.0605 | 12.99 | 1228 | 0.4478 | 0.7366 | 0.7651 | 0.7379 | 0.7952 | 0.7948 | 0.7952 | 0.7648 | 0.7545 | 0.7579 |
| 0.0566 | 14.0 | 1323 | 0.4574 | 0.7506 | 0.7708 | 0.7519 | 0.7904 | 0.7893 | 0.7904 | 0.7700 | 0.7546 | 0.7625 |
| 0.0546 | 14.92 | 1410 | 0.4541 | 0.75 | 0.7708 | 0.7517 | 0.7880 | 0.7858 | 0.7880 | 0.7685 | 0.7537 | 0.7615 |
### Per-category results
| Category                         | Precision | Recall | F1    | Support |
|:---------------------------------|:---------:|:------:|:-----:|:-------:|
| Active mobility                  | 0.700     | 0.894  | 0.791 | 19      |
| Alternative fuels                | 0.804     | 0.865  | 0.833 | 52      |
| Aviation improvements            | 0.700     | 1.000  | 0.824 | 7       |
| Comprehensive transport planning | 0.750     | 0.571  | 0.649 | 21      |
| Digital solutions                | 0.708     | 0.772  | 0.739 | 22      |
| Economic instruments             | 0.742     | 0.821  | 0.780 | 28      |
| Education and behavioral change  | 0.727     | 0.727  | 0.727 | 11      |
| Electric mobility                | 0.766     | 0.922  | 0.837 | 64      |
| Freight efficiency improvements  | 0.768     | 0.650  | 0.703 | 20      |
| Improve infrastructure           | 0.638     | 0.857  | 0.732 | 35      |
| Land use                         | 1.000     | 0.625  | 0.769 | 8       |
| Other Transport Category         | 0.600     | 0.270  | 0.375 | 11      |
| Public transport improvement     | 0.777     | 0.833  | 0.804 | 42      |
| Shipping improvements            | 0.846     | 0.846  | 0.846 | 13      |
| Transport demand management      | 0.666     | 0.400  | 0.500 | 15      |
| Vehicle improvements             | 0.783     | 0.766  | 0.774 | 47      |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.0473 kg of CO2
- **Hours Used**: 0.996 hours
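The `emissions` value in the card metadata (47.37) is in grams of CO2-eq, per the Hub's `co2_eq_emissions` convention, which reconciles with the 0.0473 kg quoted here:

```python
# Unit check relating the metadata value to the figures quoted above.
grams = 47.3697569372214   # co2_eq_emissions.emissions from the card metadata (grams)
hours = 0.996              # hours_used from the card metadata

kg = grams / 1000          # -> ~0.0473 kg of CO2-eq, as stated above
rate_kg_per_hour = kg / hours
```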
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.30GHz
- **RAM Size**: 12.67 GB
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1