Error occurs when loading dataset
After running the following code as suggested in the dataset card:
from datasets import load_dataset
dataset = load_dataset("Targoman/TLPC")
I get the errors below. How do I fix this?
Resolving data files: 100% | 41778/41778 [00:00<00:00, 210065.58it/s]
Downloading data: 100% | 41273/41273 [00:06<00:00, 4116.94files/s]
CastError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
2010 try:
-> 2011 writer.write_table(table)
2012 except CastError as cast_error:
[... 8 frames hidden ...]
CastError: Couldn't cast
url: string
category: struct<original: string, textType: string, major: string>
  child 0, original: string
  child 1, textType: string
  child 2, major: string
date: timestamp[s]
title: string
subtitle: string
content: list<item: struct<type: string, text: string>>
  child 0, item: struct<type: string, text: string>
      child 0, type: string
      child 1, text: string
tags: list<item: string>
  child 0, item: string
to
{'url': Value(dtype='string', id=None), 'category': {'original': Value(dtype='string', id=None), 'textType': Value(dtype='string', id=None), 'major': Value(dtype='string', id=None)}, 'date': Value(dtype='timestamp[s]', id=None), 'title': Value(dtype='string', id=None), 'subtitle': Value(dtype='string', id=None), 'content': [{'type': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'ref': Value(dtype='string', id=None)}], 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'images': [{'src': Value(dtype='string', id=None)}]}
because column names don't match
During handling of the above exception, another exception occurred:
DatasetGenerationCastError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
2011 writer.write_table(table)
2012 except CastError as cast_error:
-> 2013 raise DatasetGenerationCastError.from_cast_error(
2014 cast_error=cast_error,
2015 builder_name=self.info.builder_name,
DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 missing columns ({'images'})
This happened while the json dataset builder was generating data using
/root/.cache/huggingface/datasets/downloads/7e25fd44c49b0749bb62f0042edfb3d305d3e78c3d4e9fc88bcd88ceee7cc802
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Generating train split: 171/0 [00:00<00:00, 3125.17 examples/s]
According to the Data Structure section of the dataset card, most of the data fields are optional, so the JSON files do not all share the same columns. As a result, you cannot load the whole dataset with a single load_dataset call; you either have to define multiple configurations or preprocess the JSON files yourself (for example, converting them to text or CSV). A sketch of the preprocessing approach is shown below.
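If you only need the raw records, one workaround is to download the repository files with huggingface_hub and parse the JSON yourself, filling in the optional fields. The sketch below is not from the dataset card: it assumes the repository contains plain .json files and that each file holds either a single record or a list of records; the field names are taken from the schema printed in the error above, so adjust the rest to the actual file layout.

import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the raw dataset files (pass allow_patterns to limit what is fetched).
local_dir = snapshot_download(repo_id="Targoman/TLPC", repo_type="dataset")

records = []
for path in Path(local_dir).rglob("*.json"):
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Some files may hold a single object, others a list of objects; handle both.
    items = data if isinstance(data, list) else [data]
    for item in items:
        records.append({
            "url": item.get("url"),
            "category": item.get("category"),
            "date": item.get("date"),
            "title": item.get("title"),
            "subtitle": item.get("subtitle"),
            "content": item.get("content", []),
            "tags": item.get("tags", []),
            "images": item.get("images", []),  # the optional column that triggers the CastError
        })

print(len(records))

From there you can build a Dataset in memory with datasets.Dataset.from_list(records), or write the records out to CSV/JSONL with a fixed set of columns so that load_dataset no longer sees mismatched schemas.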