---
configs:
- config_name: default
  data_files:
  - split: original
    path: data/original-*
  - split: lat
    path: data/lat-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: original
    num_bytes: 19244856855
    num_examples: 39712
  - name: lat
    num_bytes: 13705512346
    num_examples: 39712
  download_size: 16984559355
  dataset_size: 32950369201
annotations_creators:
- no-annotation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- monolingual
language:
- uz
size_categories:
- 10M<n<100M
pretty_name: UzBooks
license: apache-2.0
tags:
- uz
- books
---
Dataset Card for UzBooks
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://tahrirchi.uz/grammatika-tekshiruvi
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 16.98 GB
- Size of the generated dataset: 32.95 GB
- Total amount of disk used: 49.93 GB
Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzBooks dataset, a cleaned book corpus of nearly 40,000 books in the Uzbek language. It is divided into two splits: "original" and "lat", containing the raw OCR output (a mix of Latin and Cyrillic script) and the fully Latin-script versions of the texts, respectively.
Please refer to our blog post and paper (coming soon!) for further details.
To load and use the dataset, run this script:
from datasets import load_dataset
uz_books = load_dataset("tahrirchi/uz-books")
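If you would rather not download the full ~17 GB archive up front, the datasets library also supports streaming; the following is a minimal sketch that streams the "lat" split and previews the first record:

from datasets import load_dataset

# Stream the "lat" split so the full archive does not have to be downloaded first.
uz_books_lat = load_dataset("tahrirchi/uz-books", split="lat", streaming=True)

# Each record has a single "text" field; preview the first 200 characters.
first_book = next(iter(uz_books_lat))
print(first_book["text"][:200])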
Dataset Structure
Data Instances
default
- Size of downloaded dataset files: 16.98 GB
- Size of the generated dataset: 32.95 GB
- Total amount of disk used: 49.93 GB
An example from the dataset looks as follows.
{
"text": "Hamsa\nAlisher Navoiy ..."
}
Data Fields
The data fields are the same among all splits.
default
- text: a string feature that contains the text of the book.
Data Splits
name | num_examples |
---|---|
original | 39712 |
lat | 39712 |
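As a quick sanity check, the split sizes above can be reproduced from the loaded dataset (a minimal sketch, assuming the full dataset has already been downloaded):

from datasets import load_dataset

uz_books = load_dataset("tahrirchi/uz-books")

# Both splits should report 39712 examples.
for split_name, split in uz_books.items():
    print(split_name, split.num_rows)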
Dataset Creation
The books were crawled from various internet sources and digitized using the Tesseract OCR engine. The "lat" version was created by converting the original texts to the Latin script with highly curated conversion scripts, in order to facilitate research and development in the field.
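The curated conversion scripts themselves are not included in this card; purely as an illustration of the Cyrillic-to-Latin step (a simplified sketch, not the authors' actual pipeline), a character-level transliterator might look like this:

# Illustrative only: a simplified Uzbek Cyrillic-to-Latin mapping.
# The actual "lat" split was produced with the authors' curated scripts,
# which handle far more edge cases than this sketch.
CYR_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ё": "yo", "ж": "j", "з": "z", "и": "i", "й": "y", "к": "k",
    "л": "l", "м": "m", "н": "n", "о": "o", "п": "p", "р": "r",
    "с": "s", "т": "t", "у": "u", "ф": "f", "х": "x", "ц": "ts",
    "ч": "ch", "ш": "sh", "ъ": "ʼ", "ь": "", "э": "e", "ю": "yu",
    "я": "ya", "ў": "oʻ", "қ": "q", "ғ": "gʻ", "ҳ": "h",
}

def cyrillic_to_latin(text: str) -> str:
    # Map each character, preserving case and leaving non-Cyrillic characters untouched.
    out = []
    for ch in text:
        mapped = CYR_TO_LAT.get(ch.lower(), ch)
        out.append(mapped.capitalize() if ch.isupper() else mapped)
    return "".join(out)

print(cyrillic_to_latin("Ҳамса"))  # -> "Hamsa"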
Citation
Please cite this dataset using the following format:
@online{Mamasaidov2023UzBooks,
    author  = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
    title   = {UzBooks dataset},
    year    = {2023},
    url     = {https://huggingface.co/datasets/tahrirchi/uz-books},
    note    = {Accessed: 2023-10-28}, % change this date
    urldate = {2023-10-28} % change this date
}
Gratitude
We are thankful to these awesome organizations and people for helping to make it happen:
- Ilya Gusev: for advice throughout the process
- David Dale: for advice throughout the process
Contacts
We believe that this work will enable and inspire enthusiasts around the world to uncover the hidden beauty of low-resource languages, in particular Uzbek.
For questions about the dataset or suggestions for further development, please contact us at [email protected] or [email protected].