Modalities: Text
Formats: parquet
Languages: Chinese
Libraries: Datasets, pandas
albertvillanova committed
Commit 28e91a2
1 Parent(s): 92d5052

Convert dataset to Parquet (#2)

- Convert dataset to Parquet (d10d2fe422209dee24e62895b693261fbbdd6b33)
- Add mixed data files (48d5f7848a9237f5e3684ae120ae2cbd385f929e)
- Delete loading script (7aa834eb963987f401d634363819183729d026cb)
- Delete legacy dataset_infos.json (b88dac81a26d715e7ce8739e70e656db5d7633a4)
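With the loading script gone and the data stored as Parquet, the two configurations are served directly by the Hub. A minimal sketch of loading them with the `datasets` library, assuming the canonical `c3` dataset id:

```python
from datasets import load_dataset

# Load the "dialog" configuration; "mixed" works the same way.
# Assumes the canonical "c3" dataset id on the Hugging Face Hub.
dialog = load_dataset("c3", "dialog")

print(dialog)               # DatasetDict with train/test/validation splits
print(dialog["train"][0])   # keys: documents, document_id, questions
```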

README.md CHANGED
@@ -20,7 +20,7 @@ task_ids:
 paperswithcode_id: c3
 pretty_name: C3
 dataset_info:
-- config_name: mixed
+- config_name: dialog
   features:
   - name: documents
     sequence: string
@@ -36,17 +36,17 @@ dataset_info:
       sequence: string
   splits:
   - name: train
-    num_bytes: 2710513
-    num_examples: 3138
+    num_bytes: 2039779
+    num_examples: 4885
   - name: test
-    num_bytes: 891619
-    num_examples: 1045
+    num_bytes: 646955
+    num_examples: 1627
   - name: validation
-    num_bytes: 910799
-    num_examples: 1046
-  download_size: 5481785
-  dataset_size: 4512931
-- config_name: dialog
+    num_bytes: 611106
+    num_examples: 1628
+  download_size: 2073256
+  dataset_size: 3297840
+- config_name: mixed
   features:
   - name: documents
     sequence: string
@@ -62,16 +62,33 @@ dataset_info:
       sequence: string
   splits:
   - name: train
-    num_bytes: 2039819
-    num_examples: 4885
+    num_bytes: 2710473
+    num_examples: 3138
   - name: test
-    num_bytes: 646995
-    num_examples: 1627
+    num_bytes: 891579
+    num_examples: 1045
   - name: validation
-    num_bytes: 611146
-    num_examples: 1628
-  download_size: 4352392
-  dataset_size: 3297960
+    num_bytes: 910759
+    num_examples: 1046
+  download_size: 3183780
+  dataset_size: 4512811
+configs:
+- config_name: dialog
+  data_files:
+  - split: train
+    path: dialog/train-*
+  - split: test
+    path: dialog/test-*
+  - split: validation
+    path: dialog/validation-*
+- config_name: mixed
+  data_files:
+  - split: train
+    path: mixed/train-*
+  - split: test
+    path: mixed/test-*
+  - split: validation
+    path: mixed/validation-*
 ---
 # Dataset Card for C3
 
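The new `configs` section maps each configuration to its Parquet shards, so the files can also be read without the `datasets` library. A sketch using pandas, assuming the canonical `c3` repository id and the `hf://` filesystem provided by the `huggingface_hub` package:

```python
import pandas as pd

# Read one Parquet shard directly from the Hub.
# Assumes the canonical "c3" repo id; the hf:// protocol requires
# the huggingface_hub package to be installed.
df = pd.read_parquet("hf://datasets/c3/dialog/train-00000-of-00001.parquet")

print(len(df))       # 4885 rows, matching num_examples in the card
print(df.columns)    # documents, document_id, questions
```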
c3.py DELETED
@@ -1,149 +0,0 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""C3 Parallel Corpora"""


import json

import datasets


_CITATION = """\
@article{sun2019investigating,
    title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},
    author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},
    journal={Transactions of the Association for Computational Linguistics},
    year={2020},
    url={https://arxiv.org/abs/1904.09679v3}
}
"""

_DESCRIPTION = """\
Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.
We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text.
"""

_URL = "https://raw.githubusercontent.com/nlpdata/c3/master/data/"


class C3Config(datasets.BuilderConfig):
    """BuilderConfig for C3."""

    def __init__(self, type_, **kwargs):
        """
        Args:
            type_: the question type to consider, "mixed" or "dialog"
            **kwargs: keyword arguments forwarded to super.
        """
        self.type_ = type_
        super().__init__(**kwargs)


class C3(datasets.GeneratorBasedBuilder):
    """C3 is the first free-form multiple-Choice Chinese machine reading Comprehension dataset, containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second language examinations."""

    VERSION = datasets.Version("1.0.0")

    # This is an example of a dataset with multiple configurations.
    # If you don't want/need to define several sub-sets in your dataset,
    # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
    BUILDER_CONFIG_CLASS = C3Config
    BUILDER_CONFIGS = [
        C3Config(
            name="mixed",
            description="Mixed genre questions",
            version=datasets.Version("1.0.0"),
            type_="mixed",
        ),
        C3Config(
            name="dialog",
            description="Dialog questions",
            version=datasets.Version("1.0.0"),
            type_="dialog",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # datasets.features.FeatureConnectors
            features=datasets.Features(
                {
                    "documents": datasets.Sequence(datasets.Value("string")),
                    "document_id": datasets.Value("string"),
                    "questions": datasets.Sequence(
                        {
                            "question": datasets.Value("string"),
                            "answer": datasets.Value("string"),
                            "choice": datasets.Sequence(datasets.Value("string")),
                        }
                    ),
                }
            ),
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage="https://github.com/nlpdata/c3",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # "m" for mixed or "d" for dialog
        T = self.config.type_[0]
        files = [_URL + f"c3-{T}-{split}.json" for split in ["train", "test", "dev"]]
        dl_dir = dl_manager.download_and_extract(files)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filename": dl_dir[0],
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filename": dl_dir[1],
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filename": dl_dir[2],
                    "split": "dev",
                },
            ),
        ]

    def _generate_examples(self, filename, split):
        """Yields examples."""
        with open(filename, "r", encoding="utf-8") as sf:
            data = json.load(sf)
            for id_, (documents, questions, document_id) in enumerate(data):
                yield id_, {
                    "documents": documents,
                    "questions": questions,
                    "document_id": document_id,
                }
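The removed script read the upstream JSON files straight from the nlpdata/c3 GitHub repository; each record is a triple of documents, questions, and a document id, and every question carries a question string, a choice list, and an answer. A small sketch of consuming one of those files directly, for anyone who still needs the pre-Parquet source data:

```python
import json
from urllib.request import urlopen

# Fetch one of the raw files the deleted script used and unpack a record.
# The URL pattern comes from the script (_URL + "c3-d-train.json").
url = "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-train.json"
data = json.load(urlopen(url))

documents, questions, document_id = data[0]
print(document_id)
print(documents[0])
print(questions[0]["question"], questions[0]["choice"], questions[0]["answer"])
```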
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"mixed": {"description": "Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.\nWe present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text.\n", "citation": "@article{sun2019investigating,\n title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},\n author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},\n journal={Transactions of the Association for Computational Linguistics},\n year={2020},\n url={https://arxiv.org/abs/1904.09679v3}\n}\n", "homepage": "https://github.com/nlpdata/c3", "license": "", "features": {"documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "document_id": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "choice": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "c3", "config_name": "mixed", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2710513, "num_examples": 3138, "dataset_name": "c3"}, "test": {"name": "test", "num_bytes": 891619, "num_examples": 1045, "dataset_name": "c3"}, "validation": {"name": "validation", "num_bytes": 910799, "num_examples": 1046, "dataset_name": "c3"}}, "download_checksums": {"https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-m-train.json": {"num_bytes": 3292571, "checksum": "4c84a534f1eec2c72e5f60f0c044cc39e2e42a88df01134e677e03217472d6af"}, "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-m-test.json": {"num_bytes": 1085489, "checksum": "7d8074be56cf574536a3284bc2d6b04d137694d5e5f5b1368143c0cf3e336822"}, "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-m-dev.json": {"num_bytes": 1103725, "checksum": "357d0d8d2a29bc845cbe50e048c263629f5e527b70f24c3e0838c387c8d3cb54"}}, "download_size": 5481785, "post_processing_size": null, "dataset_size": 4512931, "size_in_bytes": 9994716}, "dialog": {"description": "Machine reading comprehension tasks require a machine reader to answer questions relevant to the 
given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.\nWe present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text.\n", "citation": "@article{sun2019investigating,\n title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},\n author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},\n journal={Transactions of the Association for Computational Linguistics},\n year={2020},\n url={https://arxiv.org/abs/1904.09679v3}\n}\n", "homepage": "https://github.com/nlpdata/c3", "license": "", "features": {"documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "document_id": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "choice": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "c3", "config_name": "dialog", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2039819, "num_examples": 4885, "dataset_name": "c3"}, "test": {"name": "test", "num_bytes": 646995, "num_examples": 1627, "dataset_name": "c3"}, "validation": {"name": "validation", "num_bytes": 611146, "num_examples": 1628, "dataset_name": "c3"}}, "download_checksums": {"https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-train.json": {"num_bytes": 2683529, "checksum": "baf81f327dee84c6f451c9a4dd662e6193c67473b8791ffb72cce75cdb528f20"}, "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-test.json": {"num_bytes": 855404, "checksum": "e9920491b31f9d00ecf31e51727b495dd6b0d05f4a96f273a343e81b6775a8f0"}, "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-dev.json": {"num_bytes": 813459, "checksum": "8c7054930a40aeb288ad7c51c42fa93d54aef678ccab29c75d46a7432f4f6278"}}, "download_size": 4352392, "post_processing_size": null, "dataset_size": 3297960, "size_in_bytes": 7650352}}
 
 
dialog/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f967a070707e502bdb8a42d3b49ceb7c2a5aa5c029dc217f5be45320f3858c00
size 410376

dialog/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:abd0860012b5b5ff6246cc4e22326be00c1995b652b72eabfec2824e87735743
size 1280573

dialog/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a387db4e1337c5855a8d2b05566a7a6334fe58a83eb3db0349e344c65609046
size 382307

mixed/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba4b6fc08d5f3505a6c6606e3a8792807b8808bce8b6262ec472d6dcb720f5ef
size 636791

mixed/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72892b1447e1fa1068447ec5a253cea56e196550db6a7b52a916c350319bc6b5
size 1901402

mixed/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2061eb2e311e0fbad3de5a3c38fc4d948f391d4671907ee14fe1c65a3764828d
size 645587
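These entries are Git LFS pointer files rather than the Parquet data itself: each records the LFS spec version, the SHA-256 object id, and the byte size of the shard it stands in for. A small sketch of checking a shard against its pointer, assuming the file has already been downloaded to a matching local path:

```python
import hashlib
from pathlib import Path

# Compare a downloaded Parquet shard with its Git LFS pointer.
# The local path is an assumption; the oid and size are copied from the
# dialog/test pointer above (an LFS oid is the SHA-256 of the file content).
path = Path("dialog/test-00000-of-00001.parquet")
expected_oid = "f967a070707e502bdb8a42d3b49ceb7c2a5aa5c029dc217f5be45320f3858c00"
expected_size = 410376

data = path.read_bytes()
print(len(data) == expected_size)
print(hashlib.sha256(data).hexdigest() == expected_oid)
```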