Update README.md
README.md CHANGED
@@ -15,6 +15,8 @@ license: apache-2.0
This Electra model was trained on more than 8 billion tokens of Bosnian, Croatian, Montenegrin and Serbian text.
+***new*** We have published a version of this model fine-tuned on the named entity recognition task ([bcms-bertic-ner](https://huggingface.co/CLASSLA/bcms-bertic-ner)).
+
Comparing this model to [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and [CroSloEngual BERT](https://huggingface.co/EMBEDDIA/crosloengual-bert) on the tasks of (1) part-of-speech tagging, (2) named entity recognition, (3) geolocation prediction, and (4) commonsense causal reasoning shows the BERTić model to be superior to the other two.
## Part-of-speech tagging
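As a pointer for readers of this change, here is a minimal sketch of how the newly announced [bcms-bertic-ner](https://huggingface.co/CLASSLA/bcms-bertic-ner) model could be loaded with the Hugging Face `transformers` token-classification pipeline. The aggregation setting and the example sentence are illustrative assumptions, not part of the README diff above.

```python
# Minimal sketch (assumption: the fine-tuned model announced above works with the
# standard transformers token-classification pipeline).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CLASSLA/bcms-bertic-ner",  # repository linked in the diff above
    aggregation_strategy="simple",    # merge sub-word pieces into whole-entity spans
)

# Illustrative Croatian sentence ("Ivan lives in Zagreb."); prints a list of dicts
# with entity group, score, and character offsets.
print(ner("Ivan živi u Zagrebu."))
```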