matejulcar committed
Commit cac6701
1 Parent(s): 9d6b7e4
Create README.md
README.md ADDED
@@ -0,0 +1,21 @@
---
language:
- et

license: cc-by-sa-4.0
---
# Usage
Load the model in the transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
```
**NOTE**: it is currently *critically important* to pass `use_fast=False` to the tokenizer when using transformers version 4+ (earlier versions use `use_fast=False` by default). Without it, a fast tokenizer is loaded by default, which may appear to work (i.e. not raise an error) but tokenizes incorrectly, as fast tokenizers are not currently supported for CamemBERT-based models.
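
As a quick check that the model and slow tokenizer load correctly, the snippet below predicts a masked token in an Estonian sentence. It is a minimal sketch, not part of the model card: the example sentence ("Tallinn on Eesti `<mask>`.", i.e. "Tallinn is Estonia's `<mask>`.") is chosen only for illustration.

```
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")

# Example sentence (assumed for illustration): "Tallinn on Eesti <mask>."
text = f"Tallinn on Eesti {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the top 5 candidate tokens for it.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```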
16 |
+
|
17 |
+
# Est-RoBERTa
|
Est-RoBERTa is a monolingual Estonian BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used to train the model contain 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
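
The vocabulary size can be inspected directly from the loaded tokenizer; this is a small sketch reusing the tokenizer from the usage example above.

```
# Should be on the order of the 40,000-token subword vocabulary
# (special tokens may add a few extra entries).
print(tokenizer.vocab_size)
```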
19 |
+
|
20 |
+
Est-RoBERTa was trained for 40 epochs.
|