---
library_name: transformers
license: apache-2.0
language:
  - af
  - am
  - ar
  - as
  - az
  - be
  - bg
  - bn
  - br
  - bs
  - ca
  - cs
  - cy
  - da
  - de
  - el
  - en
  - eo
  - es
  - et
  - eu
  - fa
  - fi
  - fr
  - fy
  - ga
  - gd
  - gl
  - gu
  - ha
  - he
  - hi
  - hr
  - hu
  - hy
  - id
  - is
  - it
  - ja
  - jv
  - ka
  - kk
  - km
  - kn
  - ko
  - ku
  - ky
  - la
  - lo
  - lt
  - lv
  - mg
  - mk
  - ml
  - mn
  - mr
  - ms
  - my
  - ne
  - nl
  - 'no'
  - om
  - or
  - pa
  - pl
  - ps
  - pt
  - ro
  - ru
  - sa
  - sd
  - si
  - sk
  - sl
  - so
  - sq
  - sr
  - su
  - sv
  - sw
  - ta
  - te
  - th
  - tl
  - tr
  - ug
  - uk
  - ur
  - uz
  - vi
  - xh
  - yi
  - zh
base_model:
- SIRIS-Lab/affilgood-affilxlm
tags:
- affiliations
- ner
- science
---

# AffilGood-NER-multilingual

## Overview

<details>
<summary>Click to expand</summary>
  
- **Model type:** Language Model
- **Architecture:** XLM-RoBERTa-base
- **Language:** Multilingual
- **License:** Apache 2.0
- **Task:** Named Entity Recognition
- **Data:** AffilGood-NER
- **Additional Resources:**
  - [Paper](https://aclanthology.org/2024.sdp-1.13/)
  - [GitHub](https://github.com/sirisacademic/affilgood)
</details>

## Model description

**affilgood-NER-multilingual** is the multilingual Named Entity Recognition (NER) model of the AffilGood pipeline. It identifies named entities in raw affiliation strings from scientific papers and projects,
and is fine-tuned from the [AffilXLM](https://huggingface.co./SIRIS-Lab/affilgood-affilxlm) model, an [XLM-RoBERTa](https://arxiv.org/abs/1911.02116) base model further pre-trained on the MLM task over a medium-size corpus of raw affiliation strings collected from OpenAlex.

It has been trained on a dataset of 5,266 multilingual raw affiliation strings annotated with seven main entity types.

After analyzing hundreds of affiliations from multiple countries and languages, we defined seven entity types: `SUB-ORGANISATION`, `ORGANISATION`, `CITY`, `COUNTRY`, `ADDRESS`, `POSTAL-CODE`, and `REGION` (see the detailed [annotation guidelines here]).

**Identifying named entities** (organization names, cities, countries) in affiliation strings not only enables more effective linking with external organization registries; it also plays an essential role in the geolocation of organizations and helps identify organizations and their position in an institutional hierarchy, especially for those not listed in external databases. Information automatically extracted by a NER model can also facilitate the construction of knowledge graphs and support the development of manually curated registries.

## Intended Usage

This model is intended to be used on multilingual raw affiliation strings: it builds on XLM-RoBERTa, and both the NER training data and the large further pre-training corpus are multilingual.

## How to use

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Load the model and tokenizer from this repository
model_name = "SIRIS-Lab/affilgood-NER-multilingual"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build an NER pipeline that merges subword pieces into whole entities
affilgood_ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

sentence = "CSIC, Global ecology Unit CREAF-CSIC-UAB, Bellaterra 08193, Catalonia, Spain."
output = affilgood_ner_pipeline(sentence)
print(output)
```
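
The pipeline returns one dictionary per detected entity, with its aggregated label, confidence score, surface text, and character offsets. A hypothetical illustration of the output shape (the values and exact label names below are made up, not actual model output):

```python
# Hypothetical output shape for the sentence above:
# [{'entity_group': 'ORG', 'score': 0.99, 'word': 'CSIC', 'start': 0, 'end': 4},
#  {'entity_group': 'CITY', 'score': 0.98, 'word': 'Bellaterra', 'start': 42, 'end': 52},
#  ...]
```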


## Limitations and bias

No measures have been taken to estimate the bias and toxicity embedded in the model.

The NER dataset contains 5,266 raw affiliation strings obtained from OpenAlex. 
It includes multilingual samples from all available countries and geographies to ensure comprehensive coverage and diversity. 
To enable our model to recognize various affiliation string formats, the dataset includes a wide range of structures, different ways of grouping main and subsidiary institutions, and various methods of separating organization names. We also included ill-formed affiliations and affiliations containing errors resulting from automatic extraction from PDF files.


## Training

We used the [AffilGood-NER dataset](link) for training and evaluation.

We fine-tuned the adapted and base models for token classification with the IOB annotation schema. 
We trained the models for 25 epochs, using 80% of the dataset for training, 10% for validation and 10% for testing. 
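
For illustration, under the IOB schema each token receives either an `O` tag or an entity tag with a `B-`/`I-` prefix. A hypothetical tagging of a short affiliation fragment (the exact tag strings in the model's label set may differ):

```python
# Hypothetical IOB tagging; tag names mirror the entity types above
tokens = ["CSIC", ",", "Bellaterra", "08193", ",", "Spain", "."]
labels = ["B-ORG", "O", "B-CITY", "B-POSTALCODE", "O", "B-COUNTRY", "O"]
```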

Hyperparameters used for training are listed below; a sketch mapping them onto `transformers` follows the list:
- Learning Rate: 2e-5  
- Learning Rate Decay: Linear  
- Weight Decay: 0.01  
- Warmup Portion: 0.06  
- Batch Size: 128  
- Number of Steps: 25k  
- Adam ε: 1e-6  
- Adam β<sub>1</sub>: 0.9  
- Adam β<sub>2</sub>: 0.999  
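
A minimal sketch of how these hyperparameters map onto Hugging Face `TrainingArguments` (dataset preparation and the `Trainer` call are omitted; `output_dir` is illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="affilgood-ner-multilingual",  # illustrative output path
    learning_rate=2e-5,
    lr_scheduler_type="linear",               # linear learning-rate decay
    weight_decay=0.01,
    warmup_ratio=0.06,
    per_device_train_batch_size=128,
    num_train_epochs=25,
    adam_epsilon=1e-6,
    adam_beta1=0.9,
    adam_beta2=0.999,
)
```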

The **model checkpoint was selected from the best-performing epoch**, considering macro-averaged F1 with *strict* matching criteria.

### Evaluation

The model's performance was evaluated on a held-out 10% of the dataset.

| Category| RoBERTa | XLM | AffilRoBERTa | **AffilXLM (this model)** |
|-----|------|------|------|----------|
| ALL | .910 | .915 | .920 | **.925** |
| ORG | .869 | .886 | .879 | **.906** |
| SUB | .898 | .890 | **.911** | .892 |
| CITY | .936 | .941 | .950 | **.958** |
| COUNTRY | .971 | .973 | **.980** | .970 |
| REGION | .870 | .876 | .874 | **.882** |
| POSTAL | .975 | .975 | **.981** | .966 |
| ADDRESS | .804 | .811 | .794 | **.869** |

All the numbers reported above are F1-scores with *strict* matching, where an entity counts as correct only if both its boundaries and its type match the gold annotation.
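
Strict entity-level F1 of this kind can be computed, for instance, with the `seqeval` library; a minimal sketch with hypothetical gold and predicted tags:

```python
from seqeval.metrics import f1_score
from seqeval.scheme import IOB2

# Hypothetical gold and predicted tag sequences for one affiliation string
y_true = [["B-ORG", "O", "B-CITY", "B-COUNTRY"]]
y_pred = [["B-ORG", "O", "B-CITY", "B-CITY"]]

# Strict mode: an entity is correct only if both its boundaries and its
# type match the gold annotation exactly
print(f1_score(y_true, y_pred, mode="strict", scheme=IOB2, average="macro"))
```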

## Additional information

### Authors 

- SIRIS Lab, Research Division of SIRIS Academic, Barcelona, Spain
- LaSTUS Lab, TALN Group, Universitat Pompeu Fabra, Barcelona, Spain
- Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland

### Contact

For further information, send an email to either <[email protected]> or <[email protected]>.

### License

This work is distributed under an [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Funding

This work was partially funded and supported by:
- the Industrial Doctorates Plan of the Department of Research and Universities (Departament de Recerca i Universitats) of the Generalitat de Catalunya (ajuts SGR-Cat 2021),
- Maria de Maeztu Units of Excellence Programme CEX2021-001195-M, funded by MCIN/AEI/10.13039/501100011033
- EU HORIZON SciLake (Grant Agreement 101058573)
- EU HORIZON ERINIA (Grant Agreement 101060930)

### Citation

```bibtex
@inproceedings{duran-silva-etal-2024-affilgood,
    title = "{A}ffil{G}ood: Building reliable institution name disambiguation tools to improve scientific literature analysis",
    author = "Duran-Silva, Nicolau  and
      Accuosto, Pablo  and
      Przyby{\l}a, Piotr  and
      Saggion, Horacio",
    editor = "Ghosal, Tirthankar  and
      Singh, Amanpreet  and
      de Waard, Anita  and
      Mayr, Philipp  and
      Naik, Aakanksha  and
      Weller, Orion  and
      Lee, Yoonjoo  and
      Shen, Shannon  and
      Qin, Yanxia",
    booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.sdp-1.13",
    pages = "135--144",
}
```

### Disclaimer

<details>
<summary>Click to expand</summary>

The model published in this repository is intended for a generalist purpose 
and is made available to third parties under an Apache v2.0 License.

Please keep in mind that the model may have bias and/or any other undesirable distortions. 
When third parties deploy or provide systems and/or services to other parties using this model 
(or a system based on it) or become users of the model itself, they should note that it is under 
their responsibility to mitigate the risks arising from its use and, in any event, to comply with 
applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owners and creators of the model be liable for any results arising from the use made by third parties.
</details>