---
language: ar
datasets:
- HARD-Arabic-Dataset
---
# Ara-dialect-BERT
We further pretrained an existing model on the [HARD-Arabic-Dataset](https://github.com/elnagara/HARD-Arabic-Dataset); the weights were initialized from the [CAMeL-Lab](https://huggingface.co./CAMeL-Lab/bert-base-camelbert-msa-eighth) `bert-base-camelbert-msa-eighth` model.
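For reference, the sketch below shows what such a continued masked-language-modeling setup looks like with `transformers`. It is a minimal illustration, not the exact training recipe: the data file `hard_reviews.txt`, the text column, and all hyperparameters are assumptions.
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Initialize from the CAMeL-Lab checkpoint, as described above.
tokenizer = AutoTokenizer.from_pretrained("CAMeL-Lab/bert-base-camelbert-msa-eighth")
model = AutoModelForMaskedLM.from_pretrained("CAMeL-Lab/bert-base-camelbert-msa-eighth")

# Hypothetical: HARD reviews exported to a plain-text file, one review per line.
raw = load_dataset("text", data_files={"train": "hard_reviews.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT-style masked language modeling: 15% of tokens are masked.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="ara-dialect-bert",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    save_steps=10_000,
)

trainer = Trainer(model=model, args=args, data_collator=collator, train_dataset=tokenized)
trainer.train()
```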
### Usage
The model weights can be loaded with the `transformers` library from Hugging Face:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("MutazYoune/Ara_DialectBERT")
model = AutoModel.from_pretrained("MutazYoune/Ara_DialectBERT")
```
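Once loaded, the model can be used to extract contextual embeddings. A minimal sketch, assuming `torch` is installed and reusing the example sentence from the output below:
```python
import torch

# Encode an Arabic sentence: "The hotel is nice, but the location is far away."
inputs = tokenizer("الفندق جميل و لكن الموقع بعيد", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for each token: shape (1, sequence_length, hidden_size)
embeddings = outputs.last_hidden_state
```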
Example using `pipeline`:
```python
from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="MutazYoune/Ara_DialectBERT",
    tokenizer="MutazYoune/Ara_DialectBERT"
)
# "The hotel is nice, but the [MASK] is far away."
fill_mask("الفندق جميل و لكن [MASK] بعيد")
```
```python
{'sequence': 'الفندق جميل و لكن الموقع بعيد', 'score': 0.28233852982521057, 'token': 3221, 'token_str': 'الموقع'}
{'sequence': 'الفندق جميل و لكن موقعه بعيد', 'score': 0.24436227977275848, 'token': 19218, 'token_str': 'موقعه'}
{'sequence': 'الفندق جميل و لكن المكان بعيد', 'score': 0.15372352302074432, 'token': 5401, 'token_str': 'المكان'}
{'sequence': 'الفندق جميل و لكن الفندق بعيد', 'score': 0.029026474803686142, 'token': 11133, 'token_str': 'الفندق'}
{'sequence': 'الفندق جميل و لكن مكانه بعيد', 'score': 0.024554792791604996, 'token': 10701, 'token_str': 'مكانه'}
```