The dataset seems broken
Hi,
It seems the dataset is broken again:
Exception has occurred: NonMatchingSplitsSizesError
File "/home/shanxie/dev/temp/run_toxicity.py", line 73, in
dataset = load_dataset("FredZhang7/toxi-text-3M")
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=2940852066, num_examples=5761334, shard_lengths=[990000, 990000, 980667, 990000, 990000, 820667], dataset_name='toxi-text-3_m'), 'recorded': SplitInfo(name='train', num_bytes=1470426033, num_examples=2880667, shard_lengths=[990000, 990000, 900667], dataset_name='toxi-text-3_m')}, {'expected': SplitInfo(name='test', num_bytes=58405816, num_examples=127624, shard_lengths=None, dataset_name='toxi-text-3_m'), 'recorded': SplitInfo(name='test', num_bytes=29202908, num_examples=63812, shard_lengths=None, dataset_name='toxi-text-3_m')}]
Hi, I believe there's an issue with how Hugging Face counts the number of rows in the CSV: every recorded size is exactly half of the expected one.
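You can confirm that pattern from the numbers in the traceback alone (a quick check using only the values reported above):

# Counts copied from the 'expected' and 'recorded' SplitInfo in the error.
expected = {"train": 5761334, "test": 127624}
recorded = {"train": 2880667, "test": 63812}
for split in expected:
    # Both splits print 2.0: the metadata expects exactly twice what was read.
    print(split, expected[split] / recorded[split])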
Please try:
from datasets import load_dataset
ds = load_dataset("FredZhang7/toxi-text-3M", verification_mode="no_checks")
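If that loads, you can sanity-check what actually came down against the "recorded" sizes from the error (a minimal check; num_rows is the row count of a loaded split):

# Row counts should match the 'recorded' SplitInfo from the error message.
print(ds["train"].num_rows)  # 2880667 per the error above
print(ds["test"].num_rows)   # 63812 per the error above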