This dataset is based on the [unofficial copy](https://drive.google.com/drive/folders/1F2wCEfFHzJqX7eTuWhh-pGtrsaHPvTT8?usp=drive_link) ([discussion](https://huggingface.co./datasets/arabic_billion_words/discussions/3)) of the data and assumes it has already been downloaded. Put the `new_data_*` files into the `./dataset` folder like this:

```
[user@machine /path/to/dataset]$ tree
.
├── arabic_billion_words.py
├── dataset
│   ├── new_data_Alittihad_XML_utf_8.rar
│   ├── new_data_Almasryalyoum_XML_utf_8.rar
│   ├── new_data_Almustaqbal_XML_utf_8.rar
│   ├── new_data_Alqabas_XML_utf_8.rar
│   ├── new_data_Echoroukonline_XML_utf_8.rar
│   ├── new_data_Ryiadh_XML_utf_8.rar
│   ├── new_data_Sabanews_XML_utf_8.rar
│   ├── new_data_SaudiYoum_XML_utf_8.rar
│   ├── new_data_Techreen_XML_utf_8.rar
│   └── new_data_Youm7_XML_utf_8.rar
├── dataset_infos.json
├── README.md
└── usage_example.py
```
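Before running the loader, it can help to check that all ten archives are actually in place. The sketch below is not part of the repository (the names in `SOURCES` and the helper functions are derived only from the file listing above); it simply rebuilds the expected `.rar` file names and reports any that are missing from `./dataset`:

```python
from pathlib import Path

# Source names taken from the archive listing above.
SOURCES = [
    "Alittihad", "Almasryalyoum", "Almustaqbal", "Alqabas",
    "Echoroukonline", "Ryiadh", "Sabanews", "SaudiYoum",
    "Techreen", "Youm7",
]


def expected_archives():
    """Expected .rar file names inside ./dataset."""
    return [f"new_data_{name}_XML_utf_8.rar" for name in SOURCES]


def missing_archives(dataset_dir="./dataset"):
    """Return the expected archives that are not present in dataset_dir."""
    root = Path(dataset_dir)
    return [name for name in expected_archives() if not (root / name).exists()]


if __name__ == "__main__":
    missing = missing_archives()
    if missing:
        print("Missing archives:", *missing, sep="\n  ")
    else:
        print("All 10 archives found.")
```

If the script prints nothing missing, the layout matches the tree above and `arabic_billion_words.py` (see `usage_example.py`) should be able to find the data.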