Ara-dialect-BERT
We continued pretraining an existing model on the HARD-Arabic-Dataset; the weights were initialized from the CAMeL-Lab "bert-base-camelbert-msa-eighth" model.
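The card does not include the training script, but continued masked-language-model pretraining of this kind can be done with the transformers Trainer. The sketch below is only illustrative: the hub checkpoint id, the `hard_reviews.txt` file, and every hyperparameter are assumptions, not the authors' actual setup.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Hub id assumed from the model name mentioned above.
base = "CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical text file with one HARD hotel review per line.
dataset = load_dataset("text", data_files={"train": "hard_reviews.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% dynamic masking for MLM pretraining.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="ara_dialectbert",       # illustrative values only
    per_device_train_batch_size=32,
    num_train_epochs=3,
    save_steps=10_000,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```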
Usage
The model weights can be loaded with the transformers library by Hugging Face:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("MutazYoune/Ara_DialectBERT")
model = AutoModel.from_pretrained("MutazYoune/Ara_DialectBERT")
```
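The loaded `AutoModel` returns contextual hidden states that can be used for feature extraction. A small illustrative example (the sentence and printed shape are just for demonstration):

```python
import torch

# Encode an example sentence and take the contextual token embeddings.
inputs = tokenizer("الفندق جميل و لكن الموقع بعيد", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per token (BERT-base hidden size).
print(outputs.last_hidden_state.shape)  # torch.Size([1, sequence_length, 768])
```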
Example using pipeline (the input sentence means "the hotel is nice, but the [MASK] is far away"):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="MutazYoune/Ara_DialectBERT",
    tokenizer="MutazYoune/Ara_DialectBERT"
)
fill_mask("الفندق جميل و لكن [MASK] بعيد")
```
```
{'sequence': 'الفندق جميل و لكن الموقع بعيد', 'score': 0.28233852982521057, 'token': 3221, 'token_str': 'الموقع'}
{'sequence': 'الفندق جميل و لكن موقعه بعيد', 'score': 0.24436227977275848, 'token': 19218, 'token_str': 'موقعه'}
{'sequence': 'الفندق جميل و لكن المكان بعيد', 'score': 0.15372352302074432, 'token': 5401, 'token_str': 'المكان'}
{'sequence': 'الفندق جميل و لكن الفندق بعيد', 'score': 0.029026474803686142, 'token': 11133, 'token_str': 'الفندق'}
{'sequence': 'الفندق جميل و لكن مكانه بعيد', 'score': 0.024554792791604996, 'token': 10701, 'token_str': 'مكانه'}
```
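The same top-5 predictions can be reproduced without the pipeline. The sketch below assumes AutoModelForMaskedLM and simply applies a softmax plus top-k over the logits at the [MASK] position (k=5 is chosen only to mirror the output above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("MutazYoune/Ara_DialectBERT")
model = AutoModelForMaskedLM.from_pretrained("MutazYoune/Ara_DialectBERT")

text = "الفندق جميل و لكن [MASK] بعيد"
inputs = tokenizer(text, return_tensors="pt")

# Locate the [MASK] token in the input ids.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over the vocabulary at the masked position, then the top 5 candidates.
probs = logits[0, mask_index].softmax(dim=-1)
top = torch.topk(probs, k=5, dim=-1)
for score, token_id in zip(top.values[0], top.indices[0]):
    print(tokenizer.decode(token_id), float(score))
```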