
Google WIT Vietnamese

This data repo contains data extracted from Google WIT. All of the extracted data is for the Vietnamese language.

Given a data point x in the original dataset, whose keys follow the original field_name list, the criterion used to filter is

criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")
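As an illustration, the criterion could be applied while streaming one original shard with pandas. This is only a sketch: the shard name below is hypothetical, and keep_default_na=False is used so that empty fields stay empty strings and the truthiness check behaves as intended.

import pandas as pd

criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")

path = "wit_v1.train.all-00000-of-00010.tsv.gz"  # hypothetical shard path

kept = []
# Stream the shard in chunks to keep memory usage low.
for chunk in pd.read_csv(path, sep="\t", chunksize=50_000, dtype=str, keep_default_na=False):
    mask = chunk.apply(lambda row: bool(criteria(row.to_dict())), axis=1)
    kept.append(chunk[mask])

vi_df = pd.concat(kept, ignore_index=True)
print(len(vi_df))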

Text-related details

All .tsv.gz files follow the original data files in terms of file names and file structure.

Train split

wit_v1.train.*.tsv.gz

Number of train rows in each file (not including the header):

17690
17756
17810
17724
17619
17494
17624
17696
17777
17562

Total 176752

Validation split

wit_v1.val.*.tsv.gz

Number of validation rows in each file (not including the header):

292
273
275
320
306

Total 1466

Test split

wit_v1.test.*.tsv.gz

Number of test rows in each file (not including the header):

215
202
201
201
229

Total 1048
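The per-file counts above can be re-checked with a short line count over the shards. This is a sketch that assumes the files sit in the current directory; it counts physical lines, so it matches the numbers above only if no field contains an embedded newline.

import glob
import gzip

total = 0
for path in sorted(glob.glob("wit_v1.train.*.tsv.gz")):
    with gzip.open(path, "rt", encoding="utf-8") as f:
        n = sum(1 for _ in f) - 1  # subtract the header line
    print(path, n)
    total += n
print("total", total)  # expected 176752 for the train split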

Image-related details

Image URL only

The *.image_url_list.txt files are simply lists of image URLs taken from the *.tsv.gz files.

Number of image URLs in each list (train, val, test, all):

157281
1271
900
159452

Google Research has made sure that the splits do not share any of the same images.
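That claim can be spot-checked against the URL lists in this repo. The per-split file names below are assumptions; adjust them to whatever the *.image_url_list.txt files are actually called.

splits = {}
for name in ["train", "val", "test"]:
    # Hypothetical file names following the *.image_url_list.txt pattern.
    with open(f"{name}.image_url_list.txt", encoding="utf-8") as f:
        splits[name] = {line.strip() for line in f if line.strip()}

print({k: len(v) for k, v in splits.items()})
print("train & val :", len(splits["train"] & splits["val"]))
print("train & test:", len(splits["train"] & splits["test"]))
print("val & test  :", len(splits["val"] & splits["test"]))  # all three should be 0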

Downloaded Images

⚠ Please, for the love of the gods, read this section carefully.

In all.index.fmt_id.image_url_list.tsv, the columns from left to right, with no header, are index, fmt_id, image_url. The file maps each image_url (in all.image_url_list.txt) to an fmt_id and is used for downloading images.

fmt_id is:

  • used to name images (with the proper image extensions) in images/.
  • the index, zero-filled to 6 digits (see the sketch after this list).
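A minimal sketch of that layout, assuming fmt_id is the 0-based row index zero-filled to 6 digits and that the mapping file can be rebuilt from all.image_url_list.txt (both are assumptions, not guarantees of this repo):

# Rebuild the index -> fmt_id -> image_url mapping (sketch; assumes 0-based indices).
with open("all.image_url_list.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

with open("all.index.fmt_id.image_url_list.tsv", "w", encoding="utf-8") as out:
    for index, url in enumerate(urls):
        fmt_id = f"{index:06d}"  # e.g. index 42 -> "000042"
        out.write(f"{index}\t{fmt_id}\t{url}\n")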

Downloading took less than 36 hours with:

  • a 90 Mbps connection
  • an Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz (1.99 GHz)
  • no asynchronous downloading

In fail.index.fmt_id.status.image_url_list.tsv, the columns from left to right, with no header, are index, fmt_id, status, image_url. The file tracks image URLs that turned out to be inaccessible during downloading.

3367 image URLs returned 404 (the values in the status column). In other words, we were able to download 97.89% of the images (156085 of 159452).
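For reference, here is a minimal synchronous download sketch in the spirit of the description above. It is not the exact script used for this dataset: the extension guess, timeout, and error handling are assumptions, and real runs against Wikimedia hosts may need a User-Agent header and retries.

import os
import requests

os.makedirs("images", exist_ok=True)

with open("all.index.fmt_id.image_url_list.tsv", encoding="utf-8") as src, \
     open("fail.index.fmt_id.status.image_url_list.tsv", "w", encoding="utf-8") as fail:
    for line in src:
        index, fmt_id, url = line.rstrip("\n").split("\t")
        ext = os.path.splitext(url.split("?")[0])[1] or ".jpg"  # crude extension guess
        try:
            r = requests.get(url, timeout=30)
        except requests.RequestException:
            fail.write(f"{index}\t{fmt_id}\terror\t{url}\n")
            continue
        if r.status_code != 200:
            fail.write(f"{index}\t{fmt_id}\t{r.status_code}\t{url}\n")
            continue
        with open(os.path.join("images", fmt_id + ext), "wb") as img:
            img.write(r.content)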

The images/ folder takes up:

  • 215 GB (uncompressed)
  • 209 GB (compressed)

We use Pillow to open each image to make sure that the downloaded images are usable, and we log all faulty files in corrupted_image_list.json. There are fewer than 70 such files.

In corrupted_image_list.json, each item has the keys file_name and error. file_name is the fmt_id with its image extension but without the images/ prefix. The errors are one of:

  • the file exceeds Pillow's default pixel limit
  • the file is truncated
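The check itself can be reproduced with something like the following sketch; the exact error strings stored in corrupted_image_list.json may differ from what this produces.

import json
import os
from PIL import Image

corrupted = []
for file_name in sorted(os.listdir("images")):
    path = os.path.join("images", file_name)
    try:
        with Image.open(path) as img:
            img.load()  # force a full decode, not just a header read
    except Exception as e:
        corrupted.append({"file_name": file_name, "error": str(e)})

with open("corrupted_image_list.json", "w", encoding="utf-8") as f:
    json.dump(corrupted, f, ensure_ascii=False, indent=2)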

To actually load those files, the following code can be used to change Pillow's behavior:

from PIL import Image, ImageFile

# For very big image files
Image.MAX_IMAGE_PIXELS = None

# For truncated image files
ImageFile.LOAD_TRUNCATED_IMAGES = True
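
With those two settings in place, a file listed in corrupted_image_list.json should open and decode normally, for example (the file name below is hypothetical):

img = Image.open("images/000042.jpg")
img.load()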

To zip the images/ folder into spanned parts:

zip -r images.zip images/
zip images.zip --out spanned_images.zip -s 40g

https://superuser.com/questions/336219/how-do-i-split-a-zip-file-into-multiple-segments

To unzip the spanned_images.* files:

zip -s 0 spanned_images.zip --out images.zip
unzip images.zip

https://unix.stackexchange.com/questions/40480/how-to-unzip-a-multipart-spanned-zip-on-linux
