# Nepali LLM Datasets
This repository contains two configurations of Nepali LLM datasets:
## Configurations
1. Scrapy Engine
- Description: Contains data collected using a web scraping engine.
- Files: [List any specific files or formats]
2. Nepberta
- Description: This dataset is derived from the NepBERTa project and contains cleaned data specific to that project. The cleaned text of all articles is concatenated into a single giant string, with each article ending in `<|endoftext|>`. This long string is then segmented into chunks of approximately 500 MB each (see the sketch after this list).
- Files: 23 files, each ~500 MB (`chunk_1.txt`, `chunk_2.txt`, ..., `chunk_23.txt`)
  - split `train`: `chunk_1.txt` to `chunk_18.txt`
  - split `test`: `chunk_19.txt` to `chunk_23.txt`
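For illustration, here is a minimal sketch of the chunking scheme described above. It assumes `articles` is an iterable of cleaned article strings; the function name and file-writing details are hypothetical, not the project's actual preprocessing script.

```python
# Sketch of the chunking scheme: concatenate articles, each terminated
# by <|endoftext|>, and flush to a new file once ~500 MB is accumulated.
CHUNK_SIZE = 500 * 1024 * 1024  # ~500 MB per chunk, per the description above

def write_chunks(articles, prefix="chunk"):
    buffer, size, chunk_id = [], 0, 1
    for article in articles:
        piece = article + "<|endoftext|>"
        buffer.append(piece)
        size += len(piece.encode("utf-8"))
        if size >= CHUNK_SIZE:
            with open(f"{prefix}_{chunk_id}.txt", "w", encoding="utf-8") as f:
                f.write("".join(buffer))
            buffer, size = [], 0
            chunk_id += 1
    if buffer:  # flush the final partial chunk
        with open(f"{prefix}_{chunk_id}.txt", "w", encoding="utf-8") as f:
            f.write("".join(buffer))
```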
## Usage
To load the datasets:
```python
from datasets import load_dataset

# Load the nepberta configuration; this downloads the entire split first
# (use streaming=True, shown below, to avoid downloading everything)
nepberta_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", split="train"
)

# Each row holds one large text chunk
len(nepberta_train["text"])     # 18: number of chunks in the train split
len(nepberta_train["text"][0])  # length of one chunk (~500 MB of text)
```
```python
# Use streaming=True to avoid downloading the entire dataset
nepberta_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", streaming=True
)["train"]

# Using next()
next_text_chunk = next(iter(nepberta_train))
print(len(next_text_chunk["text"]))

# Using a for loop
for large_chunk in nepberta_train:
    print(len(large_chunk["text"]))
    break  # code to process large_chunk['text'] goes here
```
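Since each article in a chunk ends with `<|endoftext|>`, individual articles can be recovered by splitting on that separator. A minimal sketch, reusing `nepberta_train` from the streaming example above:

```python
# Recover individual articles from one streamed chunk by splitting
# on the <|endoftext|> separator that terminates each article.
large_chunk = next(iter(nepberta_train))
articles = [a for a in large_chunk["text"].split("<|endoftext|>") if a.strip()]
print(len(articles))      # number of articles in this chunk
print(articles[0][:200])  # first 200 characters of the first article
```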
```python
# Load the scrapy engine data
scrapy_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train"
)
```
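The column layout of the scrapy engine configuration is not listed above; a quick way to inspect it:

```python
# Inspect the scrapy engine split: column names and a sample row
print(scrapy_train.column_names)
print(scrapy_train[0])
```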