Michael Beukman commited on
Commit
0826f5a
1 Parent(s): 9c6b837

Initial Commit

Browse files
README.md ADDED
@@ -0,0 +1,111 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ language:
3
+ - am
4
+ tags:
5
+ - NER
6
+ - token-classification
7
+ datasets:
8
+ - masakhaner
9
+ metrics:
10
+ - f1
11
+ - precision
12
+ - recall
13
+ widget:
14
+ - text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
15
+ ---
16
+
17
+
18
+ # xlm-roberta-base-finetuned-ner-amharic
19
+ This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
20
+
21
+ More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
22
+
23
+ ## About
24
+ This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
25
+ The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
26
+
27
+ This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
28
+ This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
29
+
30
+ ### Contact & More information
31
+ For more information about the models, including training scripts, detailed results and further resources, you can visit the the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
32
+
33
+ ### Training Resources
34
+ In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB's of VRAM when using a batch size of 1.
35
+
36
+
37
+ ## Data
38
+ The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
39
+ The motivation for the use of this data is that it is the "first large, publicly available, high­ quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
40
+
41
+ ## Intended Use
42
+ This model are intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and downright performance is limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
43
+
44
+ ## Limitations
45
+ This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
46
+
47
+
48
+ Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
49
+
50
+ As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words and entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
51
+
52
+
53
+ Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
54
+
55
+ ### Privacy & Ethical Considerations
56
+ The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
57
+
58
+ No explicit ethical considerations or adjustments were made during fine-tuning of this model.
59
+
60
+ ## Metrics
61
+ The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
62
+
63
+ These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
64
+ We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
65
+
66
+ The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
67
+ ## Caveats and Recommendations
68
+ In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
69
+
70
+ ## Model Structure
71
+ Here are some performance details on this specific model, compared to others we trained.
72
+ All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
73
+
74
+ This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
75
+
76
+
77
+ Abbreviation|Description
78
+ -|-
79
+ O|Outside of a named entity
80
+ B-DATE |Beginning of a DATE entity right after another DATE entity
81
+ I-DATE |DATE entity
82
+ B-PER |Beginning of a person’s name right after another person’s name
83
+ I-PER |Person’s name
84
+ B-ORG |Beginning of an organisation right after another organisation
85
+ I-ORG |Organisation
86
+ B-LOC |Beginning of a location right after another location
87
+ I-LOC |Location
88
+
89
+
90
+
91
+ | Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
92
+ | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
93
+ | [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) (This model) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
94
+ | [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
95
+ | [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
96
+ ## Usage
97
+ To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
98
+
99
+ ```
100
+ from transformers import AutoTokenizer, AutoModelForTokenClassification
101
+ from transformers import pipeline
102
+ model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-amharic'
103
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
104
+ model = AutoModelForTokenClassification.from_pretrained(model_name)
105
+
106
+ nlp = pipeline("ner", model=model, tokenizer=tokenizer)
107
+ example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
108
+
109
+ ner_results = nlp(example)
110
+ print(ner_results)
111
+ ```
config.json ADDED
@@ -0,0 +1,50 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "_name_or_path": "xlm-roberta-base",
3
+ "architectures": [
4
+ "XLMRobertaForTokenClassification"
5
+ ],
6
+ "attention_probs_dropout_prob": 0.1,
7
+ "bos_token_id": 0,
8
+ "classifier_dropout": null,
9
+ "eos_token_id": 2,
10
+ "hidden_act": "gelu",
11
+ "hidden_dropout_prob": 0.1,
12
+ "hidden_size": 768,
13
+ "id2label": {
14
+ "0": "O",
15
+ "1": "B-DATE",
16
+ "2": "I-DATE",
17
+ "3": "B-PER",
18
+ "4": "I-PER",
19
+ "5": "B-ORG",
20
+ "6": "I-ORG",
21
+ "7": "B-LOC",
22
+ "8": "I-LOC"
23
+ },
24
+ "initializer_range": 0.02,
25
+ "intermediate_size": 3072,
26
+ "label2id": {
27
+ "B-DATE": 1,
28
+ "B-LOC": 7,
29
+ "B-ORG": 5,
30
+ "B-PER": 3,
31
+ "I-DATE": 2,
32
+ "I-LOC": 8,
33
+ "I-ORG": 6,
34
+ "I-PER": 4,
35
+ "O": 0
36
+ },
37
+ "layer_norm_eps": 1e-05,
38
+ "max_position_embeddings": 514,
39
+ "model_type": "xlm-roberta",
40
+ "num_attention_heads": 12,
41
+ "num_hidden_layers": 12,
42
+ "output_past": true,
43
+ "pad_token_id": 1,
44
+ "position_embedding_type": "absolute",
45
+ "torch_dtype": "float32",
46
+ "transformers_version": "4.11.3",
47
+ "type_vocab_size": 1,
48
+ "use_cache": true,
49
+ "vocab_size": 250002
50
+ }
eval_results.txt ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ f1 = 0.7514792899408285
2
+ loss = 0.4058218260968829
3
+ precision = 0.7298850574712644
4
+ recall = 0.774390243902439
5
+ report = precision recall f1-score support
6
+
7
+ DATE 0.65 0.73 0.69 59
8
+ LOC 0.76 0.76 0.76 143
9
+ ORG 0.54 0.67 0.60 43
10
+ PER 0.86 0.88 0.87 83
11
+
12
+ micro avg 0.73 0.77 0.75 328
13
+ macro avg 0.70 0.76 0.73 328
14
+ weighted avg 0.74 0.77 0.75 328
15
+
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c92ca88845528fb39bdc41858ddc2aceff5ef5693f1f88bdcb19508f9ffe332
3
+ size 1109924593
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
3
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
test_predictions.txt ADDED
The diff for this file is too large to render. See raw diff
 
test_results.txt ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ f1 = 0.7263249348392702
2
+ loss = 0.4312858919014298
3
+ precision = 0.7048903878583473
4
+ recall = 0.7491039426523297
5
+ report = precision recall f1-score support
6
+
7
+ DATE 0.76 0.76 0.76 106
8
+ LOC 0.72 0.78 0.75 227
9
+ ORG 0.49 0.55 0.52 83
10
+ PER 0.76 0.80 0.78 142
11
+
12
+ micro avg 0.70 0.75 0.73 558
13
+ macro avg 0.69 0.72 0.70 558
14
+ weighted avg 0.71 0.75 0.73 558
15
+
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "xlm-roberta-base", "tokenizer_class": "XLMRobertaTokenizer"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:201a82e9ba5547bcd73cbb4c2423c31abb596edacf06c325c4244b082bd1a7d7
3
+ size 1583