---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "semantic textual similarity"
- "sts-ca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/sts-ca"
metrics:
- "pearson"
model-index:
- name: roberta-base-ca-cased-sts
  results:
  - task:
      type: sentence-similarity
    dataset:
      type: projecte-aina/sts-ca
      name: sts-ca
    metrics:
      - type: pearson
        value: 0.8120486139447483
---

# Catalan BERTa (RoBERTa-base) fine-tuned for Semantic Textual Similarity

**roberta-base-ca-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language, fine-tuned from the [BERTa](https://huggingface.co./PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the BERTa model card for more details).

## Datasets
We used the STS dataset in Catalan called [STS-ca](https://huggingface.co./datasets/projecte-aina/sts-ca) for training and evaluation.
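For a quick look at the data, the dataset can be loaded from the Hub with the `datasets` library. This is a minimal sketch: the split and column names, and any extra loader arguments, are best checked on the dataset card rather than taken from this example.

```python
from datasets import load_dataset

# Load STS-ca from the Hugging Face Hub.
sts_ca = load_dataset("projecte-aina/sts-ca")

print(sts_ca)              # available splits and their sizes
print(sts_ca["train"][0])  # one annotated sentence pair (assuming a "train" split)
```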

## Evaluation and results
We evaluated _roberta-base-ca-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines (Pearson correlation × 100):

| Model       | STS-ca (Pearson × 100) |
|:------------|:----|
| roberta-base-ca-cased-sts | **81.20** |
| mBERT       | 76.34 |
| XLM-RoBERTa | 75.40 |
| WikiBERT-ca | 77.18 |


For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
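For reference, the scores in the table above can be computed from gold annotations and model predictions with `scipy`. The sketch below uses hypothetical `gold` and `pred` values purely for illustration:

```python
from scipy.stats import pearsonr

# Hypothetical gold annotations and model predictions for the same
# sentence pairs (illustrative values only).
gold = [3.5, 1.0, 4.2, 2.8]
pred = [3.2, 1.4, 4.0, 3.1]

r, _p_value = pearsonr(gold, pred)
print(f"Pearson (x100): {r * 100:.2f}")  # table scores are Pearson * 100
```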

## How to use
To get the model's correct<sup>1</sup> prediction scores, with values between 0.0 and 5.0, use the following code:

```python
from transformers import pipeline, AutoTokenizer
from scipy.special import logit

model = 'projecte-aina/roberta-base-ca-cased-sts'
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)

def prepare(sentence_pairs):
    # Build each input manually as "<cls> s1<sep><sep> s2<sep>", since the
    # pipeline is called with add_special_tokens=False below.
    sentence_pairs_prep = []
    for s1, s2 in sentence_pairs:
        sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}")
    return sentence_pairs_prep

sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."),
                  ("M'agrades.", "T'estimo."),
                  ("M'agrada el sol i la calor", "A la Garrotxa plou molt.")]

predictions = pipe(prepare(sentence_pairs), add_special_tokens=False)

# Convert the normalized scores back to the original 0-to-5 interval:
# the pipeline applies a sigmoid to the single regression output, and
# logit is its inverse.
for prediction in predictions:
    prediction['score'] = logit(prediction['score'])
print(predictions)
```
Expected output:
```
[{'label': 'SIMILARITY', 'score': 2.4280577200108384}, 
{'label': 'SIMILARITY', 'score': 2.132843521240822}, 
{'label': 'SIMILARITY', 'score': 1.615101695426227}]
```

<sup>1</sup> _**Avoid using the widget's** scores, since they are normalized and do not reflect the original annotation values._
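As an alternative to inverting the normalization with `logit`, recent versions of the `transformers` text-classification pipeline accept a `function_to_apply` argument; passing `"none"` should return the raw regression output directly. This is a sketch under that assumption, reusing `pipe`, `prepare`, and `sentence_pairs` from the snippet above:

```python
# Ask the pipeline for the raw model output instead of the
# sigmoid-normalized score, so no logit inversion is needed.
raw_predictions = pipe(prepare(sentence_pairs),
                       add_special_tokens=False,
                       function_to_apply="none")
print(raw_predictions)  # scores already on the original 0-to-5 scale
```
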
## Citing 
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```