---
tags:
- generated_from_trainer
model-index:
- name: arzRoBERTa
  results: []
metrics:
- perplexity
license: mit
datasets:
- SaiedAlshahrani/Egyptian_Arabic_Wikipedia_20230101
- SaiedAlshahrani/MASD
language:
- ar
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: الهدف من الحياة هو  <mask>
---

# Egyptian Arabic Wikipedia (arzRoBERTa<sub>BASE</sub>)

This arzRoBERTa<sub>BASE</sub> model was trained *from scratch* on the Egyptian Arabic Wikipedia articles, downloaded on the 1st of January 2023, processed with the
`Gensim` Python library, preprocessed with the `tr` Linux/Unix utility and the `CAMeLTools` Python toolkit for Arabic NLP, and hosted at [SaiedAlshahrani/Egyptian_Arabic_Wikipedia_20230101](https://huggingface.co./datasets/SaiedAlshahrani/Egyptian_Arabic_Wikipedia_20230101).
It achieves the following result:

- Pseudo-Perplexity: 115.80
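
Pseudo-perplexity for a masked language model is typically computed by masking each token in turn, scoring the true token, and exponentiating the mean negative log-likelihood. The sketch below illustrates that recipe only; it is not the authors' evaluation script, and the hub repository ID is an assumption.

```python
# Minimal pseudo-perplexity sketch for a masked LM (not the authors' exact script).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "SaiedAlshahrani/arzRoBERTa"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id).eval()

def pseudo_perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    nlls = []
    with torch.no_grad():
        # Mask one token at a time (special tokens excluded) and score the gold token.
        for i in range(1, len(ids) - 1):
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            nlls.append(-log_probs[ids[i]].item())
    return float(torch.exp(torch.tensor(nlls).mean()))

# Example sentence extending the widget prompt from this card.
print(pseudo_perplexity("الهدف من الحياة هو السعادة"))
```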

## Model description

We trained this Egyptian Arabic Wikipedia Masked Language Model (arzRoBERTa<sub>BASE</sub>) to evaluate its performance on the fill-mask task with the Masked Arab States Dataset ([MASD](https://huggingface.co./datasets/SaiedAlshahrani/MASD)) and to measure the *impact* of **template-based translation** on the Egyptian Arabic Wikipedia edition.
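
Because the model is published for the fill-mask task, it can be queried with the `transformers` pipeline. The sketch below reuses the widget prompt from this card; the repository ID is an assumption.

```python
# Hedged usage sketch: querying the model through the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SaiedAlshahrani/arzRoBERTa")  # assumed hub ID

# Widget example from this model card: "The goal of life is <mask>".
for prediction in fill_mask("الهدف من الحياة هو <mask>"):
    print(f"{prediction['token_str']:>15}  {prediction['score']:.4f}")
```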

For more details about the experiment, please **read** and **cite** our paper:

```bibtex
@inproceedings{alshahrani-etal-2023-performance,
    title = "{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}",
    author = "Alshahrani, Saied  and Alshahrani, Norah  and Dey, Soumyabrata  and Matthews, Jeanna",
    booktitle = "Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
    month = dec,
    year = "2023",
    address = "Singapore (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.arabicnlp-1.19",
    doi = "10.18653/v1/2023.arabicnlp-1.19",
    pages = "218--231",
    abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
}
```

## Intended uses & limitations

We do **not** recommend using this model because it was trained *only* on the Egyptian Arabic Wikipedia articles, which are known to be largely produced through template-based translation from English, yielding limited, shallow, and unrepresentative articles, <u>unless</u> you fine-tune the model on a large, organic, and representative Egyptian Arabic dataset.

## Training and evaluation data

We trained this model on the Egyptian Arabic Wikipedia articles ([SaiedAlshahrani/Egyptian_Arabic_Wikipedia_20230101](https://huggingface.co./datasets/SaiedAlshahrani/Egyptian_Arabic_Wikipedia_20230101)) without any validation or evaluation data (training data only) due to limited computational resources.
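
Since the corpus is hosted on the Hub, it can be reloaded with the `datasets` library, as in the sketch below; the split and column layout are assumptions to verify against the published dataset.

```python
# Hedged sketch: reloading the training corpus from the Hub.
from datasets import load_dataset

dataset = load_dataset("SaiedAlshahrani/Egyptian_Arabic_Wikipedia_20230101")

# Inspect what the dataset actually exposes before relying on split or column names.
print(dataset)
print(dataset["train"][0])  # assumes a "train" split exists
```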

## Training procedure

We trained this model using the Paperspace GPU cloud service, on a machine with 8 CPUs, 45 GB of RAM, and an NVIDIA A6000 GPU with 48 GB of GPU memory.

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them appears after the list):
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5
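
As a rough guide, the hyperparameters above map onto `transformers.TrainingArguments` as in the sketch below; the output directory is an assumption, and all other arguments keep their defaults.

```python
# Hedged sketch: TrainingArguments mirroring the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arzRoBERTa",          # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```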

### Training results

| Epoch | Step  | Training Loss | 
|:-----:|:-----:|:-------------:|
| 1     | 2500  | 2.038300      | 
| 2     | 5000  | 0.878800      | 
| 3     | 7500  | 0.682800      | 
| 4     | 10000 | 0.613100      |
| 5     | 12500 | 0.574500      | 

| Train Runtime (s) | Train Samples Per Second | Train Steps Per Second | Total FLOs | Train Loss | Epoch |
|:-----------------:|:------------------------:|:----------------------:|:----------:|:----------:|:-----:|
| 14677.12          | 248.12                   | 0.97                   | 1.2075e+17 | 0.908513   | 5     |

### Evaluation results
This arzRoBERTa<sub>BASE</sub> model has been evaluated on the Masked Arab States Dataset ([SaiedAlshahrani/MASD](https://huggingface.co./datasets/SaiedAlshahrani/MASD)). The scores are the percentages of MASD prompts for which the correct answer appears among the model's top-K fill-mask predictions.

| Top-10 | Top-50 | Top-100 |
|:------:|:------:|:-------:|
| 8.12%  | 25.62% | 35.00%  |
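
A minimal scoring loop for these top-K accuracies might look like the sketch below; the MASD split and column names are assumptions and should be checked against the dataset's actual schema.

```python
# Hedged sketch: top-K fill-mask accuracy on MASD.
# The split name and the column names ("masked_prompt", "answer") are assumptions.
from datasets import load_dataset
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SaiedAlshahrani/arzRoBERTa")  # assumed hub ID
masd = load_dataset("SaiedAlshahrani/MASD", split="train")             # assumed split

ks = (10, 50, 100)
hits = {k: 0 for k in ks}

for row in masd:
    # Retrieve the 100 most likely fillers once, then evaluate each cutoff K.
    predictions = fill_mask(row["masked_prompt"], top_k=max(ks))
    tokens = [p["token_str"].strip() for p in predictions]
    for k in ks:
        hits[k] += int(row["answer"].strip() in tokens[:k])

for k in ks:
    print(f"Top-{k} accuracy: {hits[k] / len(masd):.2%}")
```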
 
### Framework versions

- Datasets 2.9.0
- Tokenizers 0.12.1
- Transformers 4.24.0
- Pytorch 1.12.1+cu116