dominguesm committed on
Commit
7c4cc86
·
1 Parent(s): 44afd71

Add README

Files changed (1)
  1. README.md +189 -0
README.md ADDED
---
language:
- multilingual
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- it
- ja
- nl
- pl
- pt
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: mit
tags:
- peft
- LoRA
- language-detection
- xlm-roberta-base
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-lora-language-detection
  results:
  - task:
      type: text-classification
    dataset:
      type: papluca/language-identification
      name: papluca/language-identification
    metrics:
    - type: accuracy
      value: 99.43
      name: Accuracy
    - type: f1
      value: 99.43
      name: F1 Score
inference: false
---

# xlm-roberta-base-lora-language-detection

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset. It was trained with the [PEFT-LoRA](https://github.com/huggingface/peft/) method, which fine-tunes only a small number of (extra) model parameters and thereby greatly decreases computational and storage costs.

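As a rough illustration of the parameter savings, the sketch below wraps the base model with a LoRA adapter and prints the trainable-parameter count. The `LoraConfig` hyperparameters shown are illustrative assumptions, not the values used to train this checkpoint.

```python
# pip install -q peft transformers

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Load the base model with a 20-way classification head
base_model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=20
)

# Illustrative LoRA settings (assumed, not the exact training configuration)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)

# Shows how few parameters are actually updated compared to the full model;
# the exact fraction depends on the chosen LoRA configuration
model.print_trainable_parameters()
```
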
## Model description

This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output).
For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.

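To see the head concretely, a quick sketch is to instantiate the architecture and print its classifier module; the layer sizes are determined by the base config and the 20 labels.

```python
from transformers import AutoModelForSequenceClassification

# Instantiate the architecture (the head weights here are randomly initialized;
# this is only to inspect the structure, not to run predictions)
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=20
)

# Prints the XLM-RoBERTa classification head: a dense projection followed by
# a 20-way output layer
print(model.classifier)
```
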
## Intended uses & limitations

You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages:

`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`

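The labels the model returns are the two-letter codes listed above (e.g. `pt` in the inference example further down), so a small, purely illustrative lookup table is enough to make its output human-readable:

```python
# Hypothetical convenience mapping from the model's label codes to language names
LANGUAGE_NAMES = {
    "ar": "Arabic", "bg": "Bulgarian", "de": "German", "el": "Modern Greek",
    "en": "English", "es": "Spanish", "fr": "French", "hi": "Hindi",
    "it": "Italian", "ja": "Japanese", "nl": "Dutch", "pl": "Polish",
    "pt": "Portuguese", "ru": "Russian", "sw": "Swahili", "th": "Thai",
    "tr": "Turkish", "ur": "Urdu", "vi": "Vietnamese", "zh": "Chinese",
}

print(LANGUAGE_NAMES["pt"])  # Portuguese
```
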
## Training and evaluation data

The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets contain 10k each. The average accuracy on the test set is **99.4%** (this matches the average macro/weighted F1-score, since the test set is perfectly balanced). A more detailed evaluation is provided in the following table.

| Language | Precision | Recall | F1-score | Support |
|:--------:|:---------:|:------:|:--------:|:-------:|
| ar       | 1.000     | 0.998  | 0.999    | 500     |
| bg       | 0.992     | 1.000  | 0.996    | 500     |
| de       | 1.000     | 1.000  | 1.000    | 500     |
| el       | 1.000     | 1.000  | 1.000    | 500     |
| en       | 0.992     | 0.992  | 0.992    | 500     |
| es       | 0.994     | 0.992  | 0.993    | 500     |
| fr       | 0.998     | 0.998  | 0.998    | 500     |
| hi       | 0.945     | 1.000  | 0.972    | 500     |
| it       | 1.000     | 0.984  | 0.992    | 500     |
| ja       | 1.000     | 1.000  | 1.000    | 500     |
| nl       | 0.996     | 0.992  | 0.994    | 500     |
| pl       | 0.992     | 0.988  | 0.990    | 500     |
| pt       | 0.988     | 0.986  | 0.987    | 500     |
| ru       | 0.998     | 0.996  | 0.997    | 500     |
| sw       | 0.992     | 0.994  | 0.993    | 500     |
| th       | 1.000     | 1.000  | 1.000    | 500     |
| tr       | 1.000     | 1.000  | 1.000    | 500     |
| ur       | 1.000     | 0.964  | 0.982    | 500     |
| vi       | 1.000     | 1.000  | 1.000    | 500     |
| zh       | 1.000     | 1.000  | 1.000    | 500     |

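The per-language breakdown above is the kind of report produced by scikit-learn's `classification_report`. A minimal sketch, assuming `y_true` and `y_pred` are lists of language codes for the 10k test examples:

```python
from sklearn.metrics import classification_report

# Assumed inputs: gold labels and model predictions for the test split,
# both as two-letter language codes (e.g. "pt", "en", ...)
y_true = ["pt", "en", "de"]  # placeholder values
y_pred = ["pt", "en", "de"]  # placeholder values

# digits=3 matches the three-decimal precision/recall/F1 shown in the table
print(classification_report(y_true, y_pred, digits=3))
```
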
### Benchmarks

As a baseline to compare `xlm-roberta-base-lora-language-detection` against, we have used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we have used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided in the table below, and a short langid usage sketch follows it.

| Language | Precision | Recall | F1-score | Support |
|:--------:|:---------:|:------:|:--------:|:-------:|
| ar       | 0.990     | 0.970  | 0.980    | 500     |
| bg       | 0.998     | 0.964  | 0.981    | 500     |
| de       | 0.992     | 0.944  | 0.967    | 500     |
| el       | 1.000     | 0.998  | 0.999    | 500     |
| en       | 1.000     | 1.000  | 1.000    | 500     |
| es       | 1.000     | 0.968  | 0.984    | 500     |
| fr       | 0.996     | 1.000  | 0.998    | 500     |
| hi       | 0.949     | 0.976  | 0.963    | 500     |
| it       | 0.990     | 0.980  | 0.985    | 500     |
| ja       | 0.927     | 0.988  | 0.956    | 500     |
| nl       | 0.980     | 1.000  | 0.990    | 500     |
| pl       | 0.986     | 0.996  | 0.991    | 500     |
| pt       | 0.950     | 0.996  | 0.973    | 500     |
| ru       | 0.996     | 0.974  | 0.985    | 500     |
| sw       | 1.000     | 1.000  | 1.000    | 500     |
| th       | 1.000     | 0.996  | 0.998    | 500     |
| tr       | 0.990     | 0.968  | 0.979    | 500     |
| ur       | 0.998     | 0.996  | 0.997    | 500     |
| vi       | 0.971     | 0.990  | 0.980    | 500     |
| zh       | 1.000     | 1.000  | 1.000    | 500     |

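For reference, a minimal sketch of how the langid baseline can be restricted to the same 20 languages (the exact evaluation script is not part of this card):

```python
# pip install -q langid

import langid

# Restrict langid's 97 pre-trained languages to the 20 covered by this model
langid.set_languages([
    "ar", "bg", "de", "el", "en", "es", "fr", "hi", "it", "ja",
    "nl", "pl", "pt", "ru", "sw", "th", "tr", "ur", "vi", "zh",
])

# classify() returns a (language_code, score) tuple
print(langid.classify("Cada qual sabe amar a seu modo."))
```
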
## Using the model for inference

```python
# pip install -q loralib transformers
# pip install -q git+https://github.com/huggingface/peft.git@main

import torch
from peft import PeftConfig, PeftModel
from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

peft_model_id = "dominguesm/xlm-roberta-base-lora-language-detection"

# Load the PEFT adapter config
peft_config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model config
base_config = AutoConfig.from_pretrained(peft_config.base_model_name_or_path)

# Load the base model
base_model = AutoModelForSequenceClassification.from_pretrained(
    peft_config.base_model_name_or_path, config=base_config
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

# Apply the LoRA adapter weights on top of the base model
inference_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Build the text-classification pipeline
pipe = pipeline("text-classification", model=inference_model, tokenizer=tokenizer)


def detect_lang(text: str) -> list:
    # This code runs on CPU, so we use torch.cpu.amp.autocast to perform
    # automatic mixed precision.
    with torch.cpu.amp.autocast():
        # or `with torch.cuda.amp.autocast():` when running on GPU
        pred = pipe(text)
    return pred


detect_lang(
    "Cada qual sabe amar a seu modo; o modo, pouco importa; o essencial é que saiba amar."
)
# [{'label': 'pt', 'score': 0.9959434866905212}]
```

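Reusing the `pipe` and `torch` objects defined above, the pipeline also accepts a list of texts, so batch detection works the same way:

```python
# Passing a list of texts returns one prediction per input
texts = [
    "Ich wohne in Berlin.",
    "J'habite à Paris.",
    "Mieszkam w Warszawie.",
]

with torch.cpu.amp.autocast():
    predictions = pipe(texts)

# A list of {'label': ..., 'score': ...} dicts, one per input text
print(predictions)
```
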
## Training procedure

* WIP

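Until the full procedure is documented, the sketch below shows the general shape of a PEFT-LoRA fine-tuning run for this task with the Hugging Face `Trainer`. All hyperparameters (LoRA rank, learning rate, batch size, epochs) are illustrative assumptions, not the values used for this checkpoint.

```python
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Language Identification dataset: 70k train / 10k validation / 10k test
dataset = load_dataset("papluca/language-identification")
labels = sorted(set(dataset["train"]["labels"]))
label2id = {l: i for i, l in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")


def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=128)
    enc["label"] = [label2id[l] for l in batch["labels"]]
    return enc


tokenized = dataset.map(preprocess, batched=True, remove_columns=["labels", "text"])

base_model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels)
)

# Wrap the base model with a LoRA adapter (illustrative hyperparameters)
model = get_peft_model(
    base_model,
    LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1),
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="xlm-roberta-base-lora-language-detection",
        learning_rate=1e-4,  # assumed
        per_device_train_batch_size=32,  # assumed
        num_train_epochs=2,  # assumed
        evaluation_strategy="epoch",
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)

trainer.train()
```
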
### Framework versions

- torch 1.13.1+cu116
- datasets 2.10.1
- sklearn 1.2.1
- transformers 4.27.0.dev0
- langid 1.1.6
- peft 0.3.0.dev0

## Note

This work was fully based on, and inspired by, the [xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) model developed by [Luca Papariello](https://github.com/LucaPapariello).