---
license: mit
language:
- en
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-english-legislative-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on English training data containing texts from the `legislative` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Mapping from model label indices to CAP major topic codes
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
                13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20',
                19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)

# Maximum sequence length used for tokenization; 256 is an illustrative default,
# adjust it to the length of your input texts
MAXLEN = 256

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is expected to be a pandas DataFrame with a "text" column (see below)
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
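The snippet above assumes a pandas DataFrame named `data` with a `text` column. A purely illustrative example (the sentences below are hypothetical and not taken from the training data):
```python
# Hypothetical input texts wrapped in the expected DataFrame shape
data = pd.DataFrame({"text": [
    "A bill to amend the Internal Revenue Code to extend certain energy tax credits.",
    "An act concerning the regulation of hospital staffing levels.",
]})
```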
#### Inference using the Trainer class
```python
# Load the fine-tuned model with its 22 CAP major topic labels
model = AutoModelForSequenceClassification.from_pretrained(
    'poltextlab/xlm-roberta-large-english-legislative-cap-v3',
    num_labels=22,
    problem_type="multi_label_classification")

training_args = TrainingArguments(
    output_dir='.',
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8
)

trainer = Trainer(
    model=model,
    args=training_args
)

# Predict, take the highest-scoring label and map it to its CAP major topic code
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
    columns={0: 'predicted'}).reset_index(drop=True)
```
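To inspect the results next to the input texts, the predicted CAP codes can be joined back onto the original DataFrame (assuming `data` is the same DataFrame that was tokenized above):
```python
# Attach the predicted CAP major topic codes to the original texts
result = pd.concat([data.reset_index(drop=True), predicted], axis=1)
print(result.head())
```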
### Fine-tuning procedure
`xlm-roberta-large-english-legislative-cap-v3` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
    output_dir=f"../model/{model_dir}/tmp/",
    logging_dir=f"../logs/{model_dir}/",
    logging_strategy='epoch',
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-06,
    seed=42,
    save_strategy='epoch',
    evaluation_strategy='epoch',
    save_total_limit=1,
    load_best_model_at_end=True
)
```
We also used an `EarlyStoppingCallback` during training with a patience of 2 epochs, as sketched below.
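A minimal sketch of how such a callback can be attached to the Trainer (the dataset variable names are placeholders, not taken from the original training script):
```python
from transformers import EarlyStoppingCallback, Trainer

# Assumes `model`, `training_args`, `train_data` and `eval_data` are already defined;
# early stopping requires load_best_model_at_end=True and an epoch-level
# evaluation strategy, both of which are set in the TrainingArguments above
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=eval_data,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```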
## Model performance
The model was evaluated on a test set of 148,474 examples (10% of the available data).<br>
Model accuracy is **0.90**. The labels in the table below are the model's numeric label indices; they map to CAP major topic codes via `CAP_NUM_DICT` above.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.85 | 0.8 | 0.83 | 5794 |
| 1 | 0.83 | 0.81 | 0.82 | 2920 |
| 2 | 0.91 | 0.92 | 0.92 | 10336 |
| 3 | 0.9 | 0.92 | 0.91 | 5258 |
| 4 | 0.82 | 0.9 | 0.86 | 4604 |
| 5 | 0.91 | 0.93 | 0.92 | 6173 |
| 6 | 0.87 | 0.89 | 0.88 | 5111 |
| 7 | 0.88 | 0.92 | 0.9 | 4251 |
| 8 | 0.87 | 0.91 | 0.89 | 1517 |
| 9 | 0.89 | 0.92 | 0.91 | 8119 |
| 10 | 0.91 | 0.9 | 0.9 | 10326 |
| 11 | 0.87 | 0.88 | 0.88 | 5471 |
| 12 | 0.86 | 0.86 | 0.86 | 3078 |
| 13 | 0.88 | 0.86 | 0.87 | 9050 |
| 14 | 0.88 | 0.89 | 0.88 | 10197 |
| 15 | 0.87 | 0.88 | 0.87 | 2556 |
| 16 | 0.92 | 0.92 | 0.92 | 4821 |
| 17 | 0.86 | 0.86 | 0.86 | 4106 |
| 18 | 0.9 | 0.87 | 0.89 | 17295 |
| 19 | 0.9 | 0.9 | 0.9 | 10681 |
| 20 | 1 | 0.2 | 0.33 | 25 |
| 21 | 0.99 | 0.96 | 0.97 | 16785 |
| macro avg | 0.89 | 0.86 | 0.86 | 148474 |
| weighted avg | 0.9 | 0.9 | 0.9 | 148474 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool designed to simplify and speed up comparative research projects.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27 you need to install it manually (`pip install sentencepiece`).
If you encounter a `RuntimeError` when loading the model with `from_pretrained()`, passing `ignore_mismatched_sizes=True` should resolve the issue, as in the sketch below.
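A minimal sketch (the extra keyword argument is the only change compared to the loading call shown earlier):
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    'poltextlab/xlm-roberta-large-english-legislative-cap-v3',
    num_labels=22,
    problem_type="multi_label_classification",
    ignore_mismatched_sizes=True,
)
```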