---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
model-index:
- name: cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: tner/tweetner7
      type: tner/tweetner7
      args: tner/tweetner7
    metrics:
    - name: F1
      type: f1
      value: 0.6419150543257219
    - name: Precision
      type: precision
      value: 0.6451010159990658
    - name: Recall
      type: recall
      value: 0.6387604070305273
    - name: F1 (macro)
      type: f1_macro
      value: 0.5829431071584856
    - name: Precision (macro)
      type: precision_macro
      value: 0.5886989381701707
    - name: Recall (macro)
      type: recall_macro
      value: 0.5796110916728531
    - name: F1 (entity span)
      type: f1_entity_span
      value: 0.7753631609529343
    - name: Precision (entity span)
      type: precision_entity_span
      value: 0.7791661800770758
    - name: Recall (entity span)
      type: recall_entity_span
      value: 0.7715970856944605
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
  example_title: "NER Example 1"
---

# cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co./cardiffnlp/twitter-roberta-base-2022-154m) on the
[tner/tweetner7](https://huggingface.co./datasets/tner/tweetner7) dataset.
Model fine-tuning was done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more details). It achieves the following results on the test set:

- F1 (micro): 0.6419150543257219
- Precision (micro): 0.6451010159990658
- Recall (micro): 0.6387604070305273
- F1 (macro): 0.5829431071584856
- Precision (macro): 0.5886989381701707
- Recall (macro): 0.5796110916728531

The per-entity breakdown of the F1 scores on the test set is below:

- corporation: 0.5127020785219399
- event: 0.43384759233286585
- group: 0.6000666000666002
- location: 0.6535326086956522
- person: 0.8390577234310376
- product: 0.6386386386386387
- work_of_art: 0.40275650842266464
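
Macro F1 is the unweighted mean of these per-entity scores, whereas micro F1 aggregates over all entity mentions. A quick sanity check against the numbers above:

```python
# Macro F1 = unweighted mean of the per-entity F1 scores reported above.
per_entity_f1 = {
    "corporation": 0.5127020785219399,
    "event": 0.43384759233286585,
    "group": 0.6000666000666002,
    "location": 0.6535326086956522,
    "person": 0.8390577234310376,
    "product": 0.6386386386386387,
    "work_of_art": 0.40275650842266464,
}
macro_f1 = sum(per_entity_f1.values()) / len(per_entity_f1)
print(macro_f1)  # ~0.5829431071584856, the F1 (macro) reported above
```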

For F1 scores, the confidence interval is obtained by bootstrap as below:

- F1 (micro):
- F1 (macro):
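
As a rough illustration of how such an interval can be computed (a generic percentile-bootstrap sketch, not T-NER's actual implementation), where `f1_of` stands for a hypothetical function computing F1 on a list of (prediction, gold) pairs:

```python
# Illustrative percentile bootstrap for an F1 confidence interval.
# `f1_of` is a hypothetical scorer over (prediction, gold) pairs; it stands
# in for whatever metric function the evaluation actually uses.
import random

def bootstrap_ci(f1_of, examples, n_resamples=1000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    scores = []
    for _ in range(n_resamples):
        # Resample the test set with replacement and re-score.
        resample = rng.choices(examples, k=len(examples))
        scores.append(f1_of(resample))
    scores.sort()
    lower = scores[int(n_resamples * alpha / 2)]
    upper = scores[int(n_resamples * (1 - alpha / 2)) - 1]
    return lower, upper
```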

The full evaluation can be found in the [metric file of NER](https://huggingface.co./cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/eval/metric.json)
and the [metric file of entity span](https://huggingface.co./cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/eval/metric_span.json).

### Usage

This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:

```shell
pip install tner
```

and load the model as below:

```python
from tner import TransformersNER

# Load the fine-tuned checkpoint from the Hugging Face Hub.
model = TransformersNER("cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```

The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
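
If you nevertheless want to try it without tner, a minimal sketch with the transformers token-classification pipeline follows. Note that this skips CRF decoding, so the predicted tag sequences may be noisier than the results above suggest:

```python
# Minimal sketch without tner: plain transformers pipeline (no CRF decoding).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Jacob Collier is a Grammy awarded artist from England."))
```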

### Training hyperparameters

The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-base-2022-154m
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 0.0001
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 10

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co./cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/trainer_config.json).
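
The configuration file can also be inspected programmatically. A minimal sketch, assuming the JSON keys mirror the hyper-parameter names listed above (they may be named differently in the file):

```python
# Fetch and inspect the published fine-tuning configuration.
import json
from urllib.request import urlopen

url = ("https://huggingface.co./cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020"
       "/raw/main/trainer_config.json")
with urlopen(url) as response:
    config = json.load(response)
# .get() hedges against keys that are named differently in the file.
print(config.get("lr"), config.get("epoch"), config.get("crf"))
```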

### Reference

If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```bibtex
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```