c3 / dataset_infos.json
Update files from the datasets library (from 1.2.0)
{"mixed": {"description": "Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.\nWe present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text.\n", "citation": "@article{sun2019investigating,\n title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},\n author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},\n journal={Transactions of the Association for Computational Linguistics},\n year={2020},\n url={https://arxiv.org/abs/1904.09679v3}\n}\n", "homepage": "https://github.com/nlpdata/c3", "license": "", "features": {"documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "document_id": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "choice": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "c3", "config_name": "mixed", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2710513, "num_examples": 3138, "dataset_name": "c3"}, "test": {"name": "test", "num_bytes": 891619, "num_examples": 1045, "dataset_name": "c3"}, "validation": {"name": "validation", "num_bytes": 910799, "num_examples": 1046, "dataset_name": "c3"}}, "download_checksums": {"https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-m-train.json": {"num_bytes": 3292571, "checksum": "4c84a534f1eec2c72e5f60f0c044cc39e2e42a88df01134e677e03217472d6af"},
"https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-m-test.json": {"num_bytes": 1085489, "checksum": "7d8074be56cf574536a3284bc2d6b04d137694d5e5f5b1368143c0cf3e336822"}, "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-m-dev.json": {"num_bytes": 1103725, "checksum": "357d0d8d2a29bc845cbe50e048c263629f5e527b70f24c3e0838c387c8d3cb54"}}, "download_size": 5481785, "post_processing_size": null, "dataset_size": 4512931, "size_in_bytes": 9994716}, "dialog": {"description": "Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.\nWe present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text.\n", "citation": "@article{sun2019investigating,\n title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},\n author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},\n journal={Transactions of the Association for Computational Linguistics},\n year={2020},\n url={https://arxiv.org/abs/1904.09679v3}\n}\n", "homepage": "https://github.com/nlpdata/c3", "license": "", "features": {"documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "document_id": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "choice": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "c3", "config_name": "dialog", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2039819, "num_examples": 4885, "dataset_name": "c3"}, "test": {"name": "test", "num_bytes": 646995, "num_examples": 1627, "dataset_name": "c3"}, "validation": {"name": "validation", "num_bytes": 611146, "num_examples": 1628, "dataset_name": "c3"}}, "download_checksums": {"https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-train.json": {"num_bytes": 2683529, "checksum": "baf81f327dee84c6f451c9a4dd662e6193c67473b8791ffb72cce75cdb528f20"},
"https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-test.json": {"num_bytes": 855404, "checksum": "e9920491b31f9d00ecf31e51727b495dd6b0d05f4a96f273a343e81b6775a8f0"}, "https://raw.githubusercontent.com/nlpdata/c3/master/data/c3-d-dev.json": {"num_bytes": 813459, "checksum": "8c7054930a40aeb288ad7c51c42fa93d54aef678ccab29c75d46a7432f4f6278"}}, "download_size": 4352392, "post_processing_size": null, "dataset_size": 3297960, "size_in_bytes": 7650352}}
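As a sanity check on the metadata above, the per-split `num_examples` values can be summed in Python; the two configurations together account for exactly the 13,369 documents cited in the description. This is a minimal sketch — the split names and counts below are copied verbatim from this file, not read from it programmatically:

```python
# Per-split document counts, copied from the "splits" entries above.
splits = {
    "mixed": {"train": 3138, "test": 1045, "validation": 1046},
    "dialog": {"train": 4885, "test": 1627, "validation": 1628},
}

# Total documents per configuration, and across both configurations.
totals = {config: sum(counts.values()) for config, counts in splits.items()}
grand_total = sum(totals.values())

print(totals)       # {'mixed': 5229, 'dialog': 8140}
print(grand_total)  # 13369 -- matches the 13,369 documents in the description
```

With the `datasets` library installed, the same splits would typically be fetched via `datasets.load_dataset("c3", "mixed")` or `datasets.load_dataset("c3", "dialog")`, which downloads the JSON files listed under `download_checksums` and verifies them against the recorded SHA-256 digests.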