language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
- zh
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- ner
- named-entity-recognition
- span-marker
datasets:
- Babelscape/multinerd
metrics:
- precision
- recall
- f1
pipeline_tag: token-classification
widget:
- text: >-
Amelia Earhart flog mit ihrer einmotorigen Lockheed Vega 5B über den
Atlantik nach Paris.
example_title: German
- text: >-
Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic
to Paris.
example_title: English
- text: >-
Amelia Earhart voló su Lockheed Vega 5B monomotor a través del Océano
Atlántico hasta París.
example_title: Spanish
- text: >-
Amelia Earhart a fait voler son monomoteur Lockheed Vega 5B à travers
l'océan Atlantique jusqu'à Paris.
example_title: French
- text: >-
Amelia Earhart ha volato con il suo monomotore Lockheed Vega 5B attraverso
l'Atlantico fino a Parigi.
example_title: Italian
- text: >-
Amelia Earhart vloog met haar één-motorige Lockheed Vega 5B over de
Atlantische Oceaan naar Parijs.
example_title: Dutch
- text: >-
Amelia Earhart przeleciała swoim jednosilnikowym samolotem Lockheed Vega
5B przez Ocean Atlantycki do Paryża.
example_title: Polish
- text: >-
Amelia Earhart voou em seu monomotor Lockheed Vega 5B através do Atlântico
para Paris.
example_title: Portuguese
- text: >-
Амелия Эрхарт перелетела на своем одномоторном самолете Lockheed Vega 5B
через Атлантический океан в Париж.
example_title: Russian
- text: >-
Amelia Earhart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafið til
Parísar.
example_title: Icelandic
- text: >-
Η Amelia Earhart πέταξε το μονοκινητήριο Lockheed Vega 5B της πέρα από
τον Ατλαντικό Ωκεανό στο Παρίσι.
example_title: Greek
- text: >-
Amelia Earhartová přeletěla se svým jednomotorovým Lockheed Vega 5B přes
Atlantik do Paříže.
example_title: Czech
- text: >-
Amelia Earhart lensi yksimoottorisella Lockheed Vega 5B:llä Atlantin yli
Pariisiin.
example_title: Finnish
- text: >-
Amelia Earhart fløj med sin enmotoriske Lockheed Vega 5B over Atlanten til
Paris.
example_title: Danish
- text: >-
Amelia Earhart flög sin enmotoriga Lockheed Vega 5B över Atlanten till
Paris.
example_title: Swedish
- text: >-
Amelia Earhart fløy sin enmotoriske Lockheed Vega 5B over Atlanterhavet
til Paris.
example_title: Norwegian
- text: >-
Amelia Earhart și-a zburat cu un singur motor Lockheed Vega 5B peste
Atlantic până la Paris.
example_title: Romanian
- text: >-
Amelia Earhart menerbangkan mesin tunggal Lockheed Vega 5B melintasi
Atlantik ke Paris.
example_title: Indonesian
- text: >-
Амелія Эрхарт пераляцела на сваім аднаматорным Lockheed Vega 5B праз
Атлантыку ў Парыж.
example_title: Belarusian
- text: >-
Амелія Ергарт перелетіла на своєму одномоторному літаку Lockheed Vega 5B
через Атлантику до Парижа.
example_title: Ukrainian
- text: >-
Amelia Earhart preletjela je svojim jednomotornim zrakoplovom Lockheed
Vega 5B preko Atlantika do Pariza.
example_title: Croatian
- text: >-
Amelia Earhart lendas oma ühemootoriga Lockheed Vega 5B üle Atlandi
ookeani Pariisi.
example_title: Estonian
base_model: bert-base-multilingual-cased
model-index:
- name: span-marker-bert-base-multilingual-cased-multinerd
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: MultiNERD
type: Babelscape/multinerd
split: test
revision: 2814b78e7af4b5a1f1886fe7ad49632de4d9dd25
metrics:
- type: f1
value: 0.927
name: F1
- type: precision
value: 0.9281
name: Precision
- type: recall
value: 0.9259
name: Recall
span-marker-bert-base-multilingual-cased-multinerd
This model is a fine-tuned version of bert-base-multilingual-cased on the Babelscape/multinerd dataset.
If your data is not (always) capitalized correctly, consider using the uncased variant of this model instead for better performance: lxyuan/span-marker-bert-base-multilingual-uncased-multinerd.
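Both variants are loaded the same way; a minimal loading sketch (the model IDs are the ones named on this card):
from span_marker import SpanMarkerModel

# Cased variant (this model): prefer it when the input text is reliably capitalized.
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-cased-multinerd")

# Uncased variant: may serve better on lowercased or inconsistently capitalized text.
# model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")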
This model achieves the following results on the evaluation set:
- Loss: 0.0049
- Overall Precision: 0.9242
- Overall Recall: 0.9281
- Overall F1: 0.9261
- Overall Accuracy: 0.9852
Test set results:
- test_loss: 0.0052
- test_overall_accuracy: 0.9851
- test_overall_f1: 0.9270
- test_overall_precision: 0.9282
- test_overall_recall: 0.9259
- test_runtime: 2690.9722
- test_samples_per_second: 150.748
- test_steps_per_second: 4.711
This is a replication of Tom's work. Everything remains unchanged, except that we extended training to 3 epochs for a slightly longer run and set gradient_accumulation_steps to 2. Please refer to the official model page to review the original results and training script.
Results:
Language | Precision | Recall | F1 |
---|---|---|---|
all | 92.42 | 92.81 | 92.61 |
de | 95.03 | 95.07 | 95.05 |
en | 95.00 | 95.40 | 95.20 |
es | 92.05 | 91.37 | 91.71 |
fr | 92.37 | 91.41 | 91.89 |
it | 91.45 | 93.15 | 92.29 |
nl | 93.85 | 92.98 | 93.41 |
pl | 93.13 | 92.66 | 92.89 |
pt | 93.60 | 92.50 | 93.05 |
ru | 93.25 | 93.32 | 93.29 |
zh | 89.47 | 88.40 | 88.93 |
- Special thanks to Tom for creating the evaluation script and generating the results.
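The per-language figures above can, in principle, be reproduced by evaluating each language slice of the test split separately. Below is a rough sketch of one way to do that; the `lang` column filter and the use of the span_marker Trainer for evaluation are our own assumptions, not taken from the original evaluation script.
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-cased-multinerd")
test_set = load_dataset("Babelscape/multinerd", split="test")

# Assumption: each test example carries a `lang` field naming its language.
for lang in ["de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "zh"]:
    subset = test_set.filter(lambda example: example["lang"] == lang)
    trainer = Trainer(model=model, eval_dataset=subset)
    print(lang, trainer.evaluate(metric_key_prefix=f"test_{lang}"))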
Label set
Class | Description | Examples |
---|---|---|
PER (person) | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
ORG (organization) | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
LOC (location) | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
ANIM (animal) | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
BIO (biological) | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
CEL (celestial) | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
DIS (disease) | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
EVE (event) | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
FOOD (food) | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
INST (instrument) | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
MEDIA (media) | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
PLANT (plant) | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
MYTH (mythological) | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
TIME (time) | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
VEHI (vehicle) | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar. |
Inference Example
# install span_marker
(env)$ pip install span_marker
from span_marker import SpanMarkerModel
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-cased-multinerd")
description = "Singapore is renowned for its hawker centers offering dishes \
like Hainanese chicken rice and laksa, while Malaysia boasts dishes such as \
nasi lemak and rendang, reflecting its rich culinary heritage."
entities = model.predict(description)
entities
>>>
[
{'span': 'Singapore', 'label': 'LOC', 'score': 0.999988317489624, 'char_start_index': 0, 'char_end_index': 9},
{'span': 'Hainanese chicken rice', 'label': 'FOOD', 'score': 0.9894770383834839, 'char_start_index': 66, 'char_end_index': 88},
{'span': 'laksa', 'label': 'FOOD', 'score': 0.9224908947944641, 'char_start_index': 93, 'char_end_index': 98},
{'span': 'Malaysia', 'label': 'LOC', 'score': 0.9999839067459106, 'char_start_index': 106, 'char_end_index': 114}]
# missed: nasi lemak as FOOD
# missed: rendang as FOOD
# :(
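As a usage note, predict also accepts a list of sentences and returns one list of entity dicts per sentence (list inputs are supported by span_marker's predict, to the best of our knowledge). A small sketch continuing from the example above:
# Batched inference: pass a list of sentences to predict().
sentences = [
    "Singapore is renowned for its hawker centers offering dishes like Hainanese chicken rice and laksa.",
    "Malaysia boasts dishes such as nasi lemak and rendang, reflecting its rich culinary heritage.",
]
for sentence, entity_list in zip(sentences, model.predict(sentences)):
    print(sentence, entity_list)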
Quick test on Chinese
from span_marker import SpanMarkerModel
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-cased-multinerd")
# translate to chinese
description = "Singapore is renowned for its hawker centers offering dishes \
like Hainanese chicken rice and laksa, while Malaysia boasts dishes such as \
nasi lemak and rendang, reflecting its rich culinary heritage."
zh_description = "新加坡因其小贩中心提供海南鸡饭和叻沙等菜肴而闻名, 而马来西亚则拥有椰浆饭和仁当等菜肴,反映了其丰富的烹饪传统."
entities = model.predict(zh_description)
entities
>>>
[
{'span': '新加坡', 'label': 'LOC', 'score': 0.9282007813453674, 'char_start_index': 0, 'char_end_index': 3},
{'span': '马来西亚', 'label': 'LOC', 'score': 0.7439665794372559, 'char_start_index': 27, 'char_end_index': 31}]
# It only managed to capture two countries: Singapore and Malaysia.
# All other entities were missed.
Training procedure
One can reproduce the results by running this script.
Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
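A minimal sketch of how these hyperparameters could be wired into a training run with span_marker's Trainer and transformers' TrainingArguments; the label-list construction, its id order, and the output directory are illustrative assumptions, and the linked training script remains the authoritative reference.
from datasets import load_dataset
from transformers import TrainingArguments
from span_marker import SpanMarkerModel, Trainer

dataset = load_dataset("Babelscape/multinerd")

# Entity types from the label-set table above; the resulting IOB2 id order is an
# assumption and must match the dataset's tag-to-id mapping (see the training script).
entity_types = ["PER", "ORG", "LOC", "ANIM", "BIO", "CEL", "DIS", "EVE",
                "FOOD", "INST", "MEDIA", "PLANT", "MYTH", "TIME", "VEHI"]
labels = ["O"] + [f"{prefix}-{t}" for t in entity_types for prefix in ("B", "I")]

# Initialize a SpanMarker model on top of the multilingual BERT encoder.
model = SpanMarkerModel.from_pretrained("bert-base-multilingual-cased", labels=labels)

args = TrainingArguments(
    output_dir="span-marker-bert-base-multilingual-cased-multinerd",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.evaluate()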
Training results
Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
---|---|---|---|---|---|---|---|
0.0129 | 1.0 | 50436 | 0.0042 | 0.9226 | 0.9169 | 0.9197 | 0.9837 |
0.0027 | 2.0 | 100873 | 0.0043 | 0.9255 | 0.9206 | 0.9230 | 0.9846 |
0.0015 | 3.0 | 151308 | 0.0049 | 0.9242 | 0.9281 | 0.9261 | 0.9852 |
Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3