Commit 9342c4b (parent: 77859ef) by parquet-converter

Update parquet files
README.md DELETED
@@ -1,154 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - en
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- task_categories:
- - text-classification
- task_ids: []
- pretty_name: RedditQG
- ---
-
-
- # Dataset Card for RedditQG
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/)
- - **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology)
- - **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/)
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [Needs More Information]
-
- ### Dataset Summary
-
- This dataset contains answer-question pairs from QA communities of Reddit.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- An example looks as follows.
- ```
- {
-     "id": "askscience/123",
-     "qid": "2323",
-     "answer": "A test answer.",
-     "question": "A test question?",
-     "score": 20
- }
- ```
-
- ### Data Fields
-
- - `id`: a `string` feature.
- - `qid`: a `string` feature. There could be multiple answers to the same question.
- - `answer`: a `string` feature.
- - `question`: a `string` feature.
- - `score`: an `int` feature which is the value of `upvotes - downvotes`.
-
- ### Data Splits
-
- - train: 647763
- - valid: 36023
- - test: 36202
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- Reddit users.
-
- ### Personal and Sensitive Information
-
- Samples with abusive words are discarded, but there could be samples containing personal information.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- CC BY 4.0
-
- ### Citation Information
-
- ```
- @inproceedings{cao-wang-2021-controllable,
-     title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
-     author = "Cao, Shuyang and
-       Wang, Lu",
-     booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.acl-long.502",
-     doi = "10.18653/v1/2021.acl-long.502",
-     pages = "6424--6439",
-     abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
- }
- ```
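The deleted card specifies five fields per record (`id`, `qid`, `answer`, `question`, `score`). A minimal, stdlib-only sketch of validating a JSON-lines record against that schema, using the example instance from the card; the `parse_record` helper is hypothetical, not part of the dataset tooling:

```python
import json

# Expected field types from the dataset card (`score` is upvotes - downvotes).
SCHEMA = {"id": str, "qid": str, "answer": str, "question": str, "score": int}

def parse_record(line: str) -> dict:
    """Parse one JSON-lines record and check it matches the RedditQG schema."""
    record = json.loads(line)
    for field, expected_type in SCHEMA.items():
        if not isinstance(record.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return record

# The example instance from the card above.
line = ('{"id": "askscience/123", "qid": "2323", "answer": "A test answer.", '
        '"question": "A test question?", "score": 20}')
record = parse_record(line)
print(record["score"])  # 20
```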
data/test.jsonl → default/reddit_qg-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d3643d3e0a6604d7590e59df2fb8af6f1f0d1ef7b5f94d5cdd1713cce9e29467
- size 26200884
+ oid sha256:49a77a387aa77360c46a2ca67fb5b434c31f2b148df41c0316622d3d34bd9e43
+ size 15957190
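The hunks in this commit edit Git LFS pointer files, not the data itself: each pointer is a tiny text file with `version`, `oid`, and `size` keys. A small sketch of reading one (the `parse_lfs_pointer` helper is hypothetical, for illustration):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value pairs."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer for the test split, as shown in the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:49a77a387aa77360c46a2ca67fb5b434c31f2b148df41c0316622d3d34bd9e43
size 15957190
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 15957190
```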
data/train.jsonl → default/reddit_qg-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f95c6733daacb68c115fa7a28c6732f36ad73730d7c107f53fe66cb019d8fe69
- size 469436698
+ oid sha256:ccffe8abb5a03fade9fa1b5863e7f2e553cd8e140c5d6519e65e90cb992e2edf
+ size 285425933
data/valid.jsonl → default/reddit_qg-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:07fb2acfed2ef62b3e6e0889c7c17774a917d8a3d161edf72979a99f17b0e77e
- size 26137417
+ oid sha256:5800ed84de8aa097286978dea88a4be64f8784df20287cbaba1bf721e1fc2cfc
+ size 15905563
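The before/after byte sizes recorded in the three pointers quantify what the conversion bought: each split's Parquet file is roughly 39% smaller than its JSON-lines original. A quick check of that arithmetic:

```python
# Byte sizes from the LFS pointers above: (jsonl before, parquet after).
sizes = {
    "train": (469436698, 285425933),
    "validation": (26137417, 15905563),
    "test": (26200884, 15957190),
}

# Fractional size reduction per split.
savings = {split: 1 - after / before for split, (before, after) in sizes.items()}
for split, frac in savings.items():
    print(f"{split}: {frac:.0%} smaller")  # ~39% for every split
```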
reddit_qg.py DELETED
@@ -1,78 +0,0 @@
- """RedditQG: Reddit Question Generation Dataset."""
-
-
- import json
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """\
- @inproceedings{cao-wang-2021-controllable,
-     title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
-     author = "Cao, Shuyang and
-       Wang, Lu",
-     booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.acl-long.502",
-     doi = "10.18653/v1/2021.acl-long.502",
-     pages = "6424--6439",
-     abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
- }
- """
-
- _DESCRIPTION = """\
- Reddit question generation dataset.
- """
-
- _URL = "https://huggingface.co/datasets/launch/reddit_qg/resolve/main/data/"
- _URLS = {
-     "train": _URL + "train.jsonl",
-     "valid": _URL + "valid.jsonl",
-     "test": _URL + "test.jsonl",
- }
-
-
- class RedditQG(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "id": datasets.Value("string"),
-                 "qid": datasets.Value("string"),
-                 "question": datasets.Value("string"),
-                 "answer": datasets.Value("string"),
-                 "score": datasets.Value("int32")
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=("answer", "question"),
-             homepage="",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_files = dl_manager.download_and_extract(_URLS)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"split_file": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"split_file": downloaded_files["valid"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"split_file": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, split_file):
-         """This function returns the examples in the raw (text) form."""
-         logger.info(f"generating examples from = {split_file}")
-
-         with open(split_file) as f:
-             for line in f:
-                 data = json.loads(line)
-                 yield data["id"], data
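The deleted script's `_generate_examples` streamed one JSON object per line and yielded it keyed by the record's `id`. A self-contained, stdlib-only sketch of that generator logic, exercised on the card's example record via an in-memory file:

```python
import io
import json

def generate_examples(fileobj):
    """Mirror of the deleted script's _generate_examples: parse each
    JSON line and yield (id, record) pairs."""
    for line in fileobj:
        data = json.loads(line)
        yield data["id"], data

# Simulate a split file with a single record (the card's example instance).
sample = io.StringIO(
    '{"id": "askscience/123", "qid": "2323", "answer": "A test answer.", '
    '"question": "A test question?", "score": 20}\n'
)
examples = list(generate_examples(sample))
print(examples[0][0])  # askscience/123
```

After this commit, the script is no longer needed: the Hub loads the Parquet files directly, so the splits resolve without custom code.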