Dataset Preview

The full dataset viewer is not available for this repository; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2029, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1396, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1045, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1029, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2040, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Field                  Type     Value
_data_files            list     [ { "filename": "data-00000-of-00001.arrow" } ]
_fingerprint           string   a1df46296853828f
_format_columns        null     null
_format_kwargs         dict     {}
_format_type           null     null
_output_all_columns    bool     false
_split                 null     null
YAML Metadata Warning: empty or missing yaml metadata in repo card (https://huggingface.co./docs/hub/datasets-cards)

Dataset Card for Custom Text Dataset

Dataset Name

Custom Summarization Dataset for Text Generation

Overview

This dataset is a custom dataset for the document summarization task.

It combines a subset of the cnn_dailymail dataset with user-defined example data.

It consists of pairs of a document body (sentence) and the corresponding summary (labels).

It can be useful for training summarization models in natural language processing (NLP) research.

Composition

  • Train split: built from custom example data; it contains a single news article and its summary.
  • Test split: built from a subset of the cnn_dailymail dataset; it contains 100 news article bodies and their summaries.
  • Data format:
    • sentence: article body (string)
    • labels: summary (string)

Collection Process

--

Preprocessing

  • Both article bodies and summaries are stored as strings; special characters and whitespace receive basic handling.
  • The training data was normalized: unnecessary whitespace and special characters were removed.
  • No maximum input length is enforced; one can be set at training time if the model requires it.
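
The normalization described above can be sketched as follows (a generic cleanup that collapses whitespace and strips non-printable symbols; the card does not specify the exact rules, so the character set kept here is an assumption):

```python
import re

def normalize(text: str) -> str:
    # Collapse runs of whitespace into single spaces
    text = re.sub(r"\s+", " ", text)
    # Drop characters outside a basic printable set; which "special
    # characters" were removed is an assumption, the card does not say
    text = re.sub(r"[^\w\s.,;:!?'\"()-]", "", text)
    return text.strip()

print(normalize("Hello,\n\n  world!!  ©2024 "))
```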

How to Use

from datasets import load_from_disk

# Load the saved custom dataset splits
train_dataset = load_from_disk("./results/custom_dataset/train")
test_dataset = load_from_disk("./results/custom_dataset/test")

# Inspect a sample from each split (loading a split directory returns a
# Dataset, so index it directly rather than by split name)
print("Train Dataset Sample: ", train_dataset[0])
print("Test Dataset Sample: ", test_dataset[0])

# Use with Hugging Face Transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
encoded_input = tokenizer(train_dataset[0]['sentence'], return_tensors='pt')
print(encoded_input)
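
For fine-tuning a sequence-to-sequence model, inputs and targets can be tokenized together. A minimal sketch (the 512/128 length limits are illustrative choices, not from the card; `text_target` is the tokenizers argument for encoding targets):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

def preprocess(batch):
    # Tokenize article bodies as model inputs
    model_inputs = tokenizer(batch["sentence"], max_length=512, truncation=True)
    # Tokenize summaries as targets
    targets = tokenizer(text_target=batch["labels"], max_length=128, truncation=True)
    model_inputs["labels"] = targets["input_ids"]
    return model_inputs

# e.g. tokenized_train = train_dataset.map(preprocess, batched=True)
```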

Evaluation

--

Limitations

--

Ethical Considerations

--

Downloads last month
3