SaudiBERT is the first pre-trained large language model focused exclusively on Saudi dialect text. The model was pretrained on two large-scale corpora: the Saudi Tweets Mega Corpus (STMC), which contains more than 141 million tweets, and the Saudi Forum Corpus, which includes more than 70 million sentences collected from various Saudi online forums. Together, the two corpora comprise 26.3 GB of text. The released checkpoint has 144M parameters (F32, Safetensors). The code files, along with the results, are available in the accompanying repository.
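
The model can be loaded directly with the Hugging Face `transformers` library. The snippet below is a minimal sketch, assuming the checkpoint is published under the model id `faisalq/SaudiBERT` (as referenced on this page) and exposes the standard BERT masked-language-model interface; the Saudi-dialect example sentence is purely illustrative.

```python
# Minimal usage sketch. Assumptions: the checkpoint is hosted on the
# Hugging Face Hub as "faisalq/SaudiBERT" and uses the standard BERT
# [MASK] token; the example sentence is illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="faisalq/SaudiBERT")

# Predict the masked word in a Saudi-dialect sentence
# ("the weather today is very [MASK]").
for pred in fill_mask("الجو اليوم [MASK] مرة"):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```

For downstream tasks such as text classification, the same model id can be passed to `AutoTokenizer.from_pretrained` and `AutoModelForSequenceClassification.from_pretrained` and fine-tuned in the usual way.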

BibTeX

If you use the SaudiBERT model in your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (citation details to be updated once the paper is published):

@article{qarah2024saudibert,
  title={SaudiBERT: A Large Language Model Pretrained on Saudi Dialect Corpora},
  author={Qarah, Faisal},
  journal={arXiv preprint arXiv:2405.06239},
  year={2024}
}
