---
license: mit
base_model: BAAI/bge-base-en-v1.5
tags:
- generated_from_trainer
model-index:
- name: CONDITIONAL-multilabel-bge
  results: []
datasets:
- GIZ/policy_classification
library_name: transformers
pipeline_tag: text-classification

co2_eq_emissions:
  emissions: 28.4522411264774
  source: codecarbon
  training_type: fine-tuning
  on_cloud: true
  cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
  ram_total_size: 12.6747894287109
  hours_used: 0.702
  hardware_used: 1 x Tesla T4
---


# CONDITIONAL-multilabel-bge

This model is a fine-tuned version of [BAAI/bge-base-en-v1.5](https://huggingface.co./BAAI/bge-base-en-v1.5) on the [Policy-Classification](https://huggingface.co./datasets/GIZ/policy_classification) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5295
- Precision-micro: 0.5138
- Precision-samples: 0.1866
- Precision-weighted: 0.5169
- Recall-micro: 0.7378
- Recall-samples: 0.1874
- Recall-weighted: 0.7378
- F1-micro: 0.6058
- F1-samples: 0.1852
- F1-weighted: 0.6065

## Model description

This model performs multilabel classification: given an input paragraph, it predicts two labels simultaneously, ConditionalLabel and UnconditionalLabel.
- **Conditional**: in the context of climate policy documents, whether a given Target/Action/Plan/Policy commitment is made conditionally.
- **Unconditional**: in the context of climate policy documents, whether a given Target/Action/Plan/Policy commitment is made unconditionally.
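
A minimal inference sketch is shown below. The Hub repo id and the 0.5 decision threshold are assumptions, so adjust them to the actual deployment:

```python
from transformers import pipeline

# Repo id is an assumption -- point it at the model's actual Hub location.
classifier = pipeline(
    "text-classification",
    model="GIZ/CONDITIONAL-multilabel-bge",
    top_k=None,  # return a score for every label, not just the best one
)

text = (
    "We commit to reducing emissions by 30% by 2030, subject to the "
    "provision of international support."
)

# Assuming the config sets problem_type="multi_label_classification",
# the pipeline applies a per-label sigmoid; threshold each score
# independently instead of taking the argmax.
scores = classifier(text)[0]
predicted = [s["label"] for s in scores if s["score"] >= 0.5]
print(predicted)
```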

## Intended uses & limitations

The paragraphs in the dataset were copied from sections of the source documents whose headings or sub-headings indicate the Conditional/Unconditional category, but those headings are not always included with the paragraph. This makes conditionality difficult to assess: annotators who were given only the paragraph, without the full surrounding context, had difficulty judging whether the commitments made in it are conditional.

## Training and evaluation data

- Training dataset: 5,901 samples

| Class | Positive count |
|:-------------------|:------:|
| ConditionalLabel | 1986 |
| UnconditionalLabel | 1312 |

- Validation dataset: 1,190 samples

| Class | Positive count |
|:-------------------|:------:|
| ConditionalLabel | 192 |
| UnconditionalLabel | 136 |
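
The underlying dataset can be loaded from the Hub for inspection; its split and column names are not documented here, so treat the commented tally below as hypothetical:

```python
from datasets import load_dataset

# Load the dataset used for fine-tuning and inspect its splits/columns.
ds = load_dataset("GIZ/policy_classification")
print(ds)

# Hypothetical column name -- the positive counts above would be
# tallied along these lines once the real schema is confirmed:
# n_conditional = sum(ds["train"]["ConditionalLabel"])
```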

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.02e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 6
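
These settings map onto Hugging Face `TrainingArguments` roughly as follows (a sketch; `output_dir` and the evaluation strategy are illustrative assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CONDITIONAL-multilabel-bge",  # placeholder path
    learning_rate=4.02e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=200,
    num_train_epochs=6,
    evaluation_strategy="epoch",  # assumption: metrics are reported per epoch
)
```

The Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit arguments.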

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:-----------------:|:------------------:|:------------:|:--------------:|:---------------:|:--------:|:----------:|:-----------:|
| 0.5361        | 1.0   | 369  | 0.4405          | 0.3405          | 0.1655            | 0.4102             | 0.6311       | 0.1622         | 0.6311          | 0.4423   | 0.1622     | 0.4503      |
| 0.3692        | 2.0   | 738  | 0.3437          | 0.4631          | 0.1794            | 0.4929             | 0.6890       | 0.1761         | 0.6890          | 0.5539   | 0.1762     | 0.5604      |
| 0.182         | 3.0   | 1107 | 0.3915          | 0.4702          | 0.1857            | 0.4871             | 0.7470       | 0.1891         | 0.7470          | 0.5771   | 0.1854     | 0.5800      |
| 0.0757        | 4.0   | 1476 | 0.4713          | 0.4960          | 0.1882            | 0.4986             | 0.7530       | 0.1908         | 0.7530          | 0.5981   | 0.1877     | 0.5987      |
| 0.0298        | 5.0   | 1845 | 0.4971          | 0.5161          | 0.1840            | 0.5184             | 0.7317       | 0.1857         | 0.7317          | 0.6053   | 0.1829     | 0.6058      |
| 0.0152        | 6.0   | 2214 | 0.5295          | 0.5138          | 0.1866            | 0.5169             | 0.7378       | 0.1874         | 0.7378          | 0.6058   | 0.1852     | 0.6065      |

Per-label metrics on the validation set (final epoch):

| Label | Precision | Recall | F1-score | Support |
|:-------------------|:---------:|:------:|:--------:|:-------:|
| ConditionalLabel | 0.490 | 0.760 | 0.595 | 192 |
| UnconditionalLabel | 0.555 | 0.706 | 0.621 | 136 |
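
For reference, the micro/samples/weighted metrics above can be reproduced from raw logits along these lines (a sketch; the 0.5 threshold is an assumption):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def compute_metrics(logits: np.ndarray, labels: np.ndarray) -> dict:
    """Multilabel metrics from raw logits, thresholding each label at 0.5."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # per-label sigmoid
    preds = (probs >= 0.5).astype(int)
    out = {}
    for avg in ("micro", "samples", "weighted"):
        p, r, f1, _ = precision_recall_fscore_support(
            labels, preds, average=avg, zero_division=0
        )
        out[f"precision-{avg}"] = p
        out[f"recall-{avg}"] = r
        out[f"f1-{avg}"] = f1
    return out
```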

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.02845 kg of CO₂eq
- **Hours Used**: 0.702 hours
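
Tracking emissions with CodeCarbon looks roughly like this (a sketch around the training call):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# trainer.train()  # fine-tuning runs here
emissions_kg = tracker.stop()  # returns kg of CO2eq
print(f"{emissions_kg:.5f} kg CO2eq")
```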

### Training Hardware
- **On Cloud**: yes
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB



### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2