AutoTrain Dataset for project: code-mixed-language-identification

Dataset Description

This dataset has been automatically processed by AutoTrain for the project code-mixed-language-identification.

Languages

The BCP-47 code for the dataset's language is unk (unknown).

Dataset Structure

Data Instances

A sample from this dataset looks as follows:

[
  {
    "feat_Unnamed: 0": 1104,
    "tokens": [
      "@user",
      "salah",
      "satu",
      "dari",
      "4",
      "anak",
      "dr",
      "sunardi",
      "ada",
      "yg",
      "berprofesi",
      "sbg",
      "dokter",
      "juga",
      ",",
      "lulusan",
      "unair",
      ",",
      "sudah",
      "selesai",
      "koas",
      "dan",
      "intern",
      "tolong",
      "disupport",
      "pak",
      "anak",
      "beliau"
    ],
    "tags": [
      6,
      1,
      1,
      1,
      6,
      1,
      6,
      6,
      1,
      1,
      1,
      1,
      1,
      1,
      6,
      1,
      6,
      6,
      1,
      1,
      1,
      1,
      0,
      1,
      3,
      1,
      1,
      1
    ]
  },
  {
    "feat_Unnamed: 0": 239,
    "tokens": [
      "@user",
      "kamu",
      "pake",
      "apa",
      "toh",
      "?",
      "aku",
      "pake",
      "xl",
      "banter",
      "lho",
      "di",
      "apartemen",
      "pun",
      "bisa",
      "download",
      "yutub"
    ],
    "tags": [
      6,
      1,
      1,
      1,
      1,
      6,
      1,
      1,
      6,
      1,
      1,
      1,
      1,
      1,
      1,
      0,
      6
    ]
  }
]
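
Each entry in tags aligns one-to-one with the token at the same position, and the integers index into the label list given under Dataset Fields below. A minimal sketch of that pairing in Python, using a truncated copy of the first example above:

# Pair each token with its numeric tag for one record.
# `sample` is a truncated copy of the first example shown above.
sample = {
    "tokens": ["@user", "salah", "satu", "dari", "4"],
    "tags": [6, 1, 1, 1, 6],
}

for token, tag in zip(sample["tokens"], sample["tags"]):
    print(f"{token}\t{tag}")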

Dataset Fields

The dataset has the following fields (also called "features"):

{
  "feat_Unnamed: 0": "Value(dtype='int64', id=None)",
  "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "tags": "Sequence(feature=ClassLabel(names=['EN', 'ID', 'JV', 'MIX_ID_EN', 'MIX_ID_JV', 'MIX_JV_EN', 'OTH'], id=None), length=-1, id=None)"
}
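
The tag integers can be mapped back to their class names (EN, ID, JV, MIX_ID_EN, MIX_ID_JV, MIX_JV_EN, OTH) with ClassLabel.int2str from the datasets library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub; the repository id below is a placeholder, not the actual repo path:

from datasets import load_dataset

# Placeholder repository id -- substitute the actual AutoTrain repo path.
ds = load_dataset("your-username/autotrain-data-code-mixed-language-identification")

train = ds["train"]
tag_feature = train.features["tags"].feature  # ClassLabel(names=['EN', 'ID', 'JV', ...])

example = train[0]
for token, tag_id in zip(example["tokens"], example["tags"]):
    print(token, tag_feature.int2str(tag_id))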

Dataset Splits

This dataset is split into train and validation splits. The split sizes are as follows:

Split name   Num samples
train        1105
valid        438
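
Assuming the dataset loads as a DatasetDict containing these two splits, a short sketch to confirm the sizes (again with a placeholder repository id):

from datasets import load_dataset

# Placeholder repository id -- substitute the actual AutoTrain repo path.
ds = load_dataset("your-username/autotrain-data-code-mixed-language-identification")

for split_name, split in ds.items():
    print(split_name, split.num_rows)
# Expected per the table above: train 1105, valid 438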