---
license: cc-by-nc-sa-4.0
---
# RoBERTweetTurkCovid (uncased)

Pretrained model on Turkish, trained with a masked language modeling (MLM) objective. The model is uncased. The pretraining corpus is a collection of Turkish tweets related to COVID-19.
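As a toy illustration of the MLM objective mentioned above, the sketch below randomly replaces a fraction of tokens with a mask token; during pretraining, the model learns to predict the original token at each masked position. The 15% masking rate and the `[MASK]` token string are BERT-style conventions assumed here, not details confirmed by this model card.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=1):
    """Toy MLM masking: replace each token with `mask_token` with
    probability `mask_prob`. The 15% rate is a BERT-style convention,
    assumed here rather than taken from this model's training setup."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)    # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)   # no prediction target at this position
    return masked, labels

tokens = "covid aşısı bugün herkese ücretsiz yapıldı".split()
masked, labels = mask_tokens(tokens)
```

The MLM loss is computed only at the masked positions, which is why unmasked positions carry no label.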
The model architecture is similar to RoBERTa-base (12 layers, 12 attention heads, and a hidden size of 768). The tokenization algorithm is WordPiece, with a vocabulary size of 30k.
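Since the card names WordPiece as the tokenization algorithm, here is a minimal sketch of its core greedy longest-match-first segmentation. The toy vocabulary and example words are made up for illustration; the actual model's vocabulary has ~30k entries.

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece segmentation of a single word.
    Continuation pieces carry the '##' prefix. Toy sketch of the algorithm
    named in the model card, not the model's actual tokenizer code."""
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        cur_piece = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # non-initial pieces are prefixed
            if piece in vocab:
                cur_piece = piece     # longest match found
                break
            end -= 1
        if cur_piece is None:
            return [unk]              # whole word falls back to [UNK]
        tokens.append(cur_piece)
        start = end
    return tokens

# Hypothetical toy vocabulary; the real one is learned from the tweet corpus.
vocab = {"aşı", "##lama", "covid", "##19"}
print(wordpiece_tokenize("aşılama", vocab))   # ['aşı', '##lama']
print(wordpiece_tokenize("covid19", vocab))   # ['covid', '##19']
```

Each word is split into the longest vocabulary piece available at the current position, which is how a 30k-entry vocabulary can cover unbounded tweet text.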