Dataset Viewer issue: DatasetGenerationCastError
#4 opened by brightening-eyes
The dataset viewer is not working.
Error details:
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 missing columns ({'images'})
This happened while the json dataset builder was generating data using
gzip://2024-04.jsonl::hf://datasets/Targoman/TLPC@54f900330a79256abbec2e8f6a849b1ae81b664c/aa/2024-04.jsonl.gz
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
return cast_table_to_schema(table, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
url: string
category: struct<original: string, textType: string, major: string>
child 0, original: string
child 1, textType: string
child 2, major: string
date: timestamp[s]
title: string
subtitle: string
content: list<item: struct<type: string, text: string>>
child 0, item: struct<type: string, text: string>
child 0, type: string
child 1, text: string
tags: list<item: string>
child 0, item: string
to
{'url': Value(dtype='string', id=None), 'category': {'original': Value(dtype='string', id=None), 'textType': Value(dtype='string', id=None), 'major': Value(dtype='string', id=None)}, 'date': Value(dtype='timestamp[s]', id=None), 'title': Value(dtype='string', id=None), 'subtitle': Value(dtype='string', id=None), 'content': [{'type': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'ref': Value(dtype='string', id=None)}], 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'images': [{'src': Value(dtype='string', id=None)}]}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1100, in stream_convert_to_parquet
builder._prepare_split(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 missing columns ({'images'})
This happened while the json dataset builder was generating data using
gzip://2024-04.jsonl::hf://datasets/Targoman/TLPC@54f900330a79256abbec2e8f6a849b1ae81b664c/aa/2024-04.jsonl.gz
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Hi. I got this error while browsing TLPC in the HF dataset viewer. It looks like one of the .gz files has a problem with a column called images. (Also, if the whole dataset were converted to Parquet, it would both take less space and allow the columns to be unified, so these problems would not occur.)
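One way to unify the columns, as suggested above, is to rewrite each .jsonl.gz file so every record carries the same set of top-level keys, filling absent optional fields with null. A minimal stdlib sketch (the field list below is illustrative, taken from the schema in the error message; file paths are hypothetical):

```python
import gzip
import json

# Union of top-level fields seen in the cast error above; "images" is the
# column reported missing in some files. This list is an assumption based
# on the schema printed in the traceback.
ALL_FIELDS = ["url", "category", "date", "title", "subtitle",
              "content", "tags", "images"]

def normalize_record(record):
    """Return the record with the full set of top-level keys,
    filling absent optional fields with None so all files share one schema."""
    return {field: record.get(field) for field in ALL_FIELDS}

def normalize_file(src_path, dst_path):
    """Rewrite a .jsonl.gz file so every line has identical columns."""
    with gzip.open(src_path, "rt", encoding="utf-8") as src, \
         gzip.open(dst_path, "wt", encoding="utf-8") as dst:
        for line in src:
            record = normalize_record(json.loads(line))
            dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```

After normalizing, every file casts to the same Arrow schema, so the viewer (and load_dataset) no longer hits the missing-column error.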
According to the Data Structure section, most data fields are optional. Not all data files have the same columns, so you cannot use the load_dataset function to load all the data at once. You must either use multiple configurations or preprocess the JSON files and convert them to text or CSV format.
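The multiple-configurations route mentioned above is described in the docs linked from the error message: configurations are declared in the YAML header of the repository README. A sketch along those lines (the `aa` directory name comes from the path in the error; any additional directory names would be filled in per repository layout):

```yaml
configs:
- config_name: aa
  data_files: "aa/*.jsonl.gz"
# add one config_name entry per group of files that share a schema
```

Each configuration is then loadable on its own, so files with differing optional columns no longer need to be cast to a single shared schema.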
ziabary changed discussion status to closed