parquet-converter committed
Commit 82dfafe · 1 parent: d38d3f4

Update parquet files

.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,169 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- - crowdsourced
- - found
- language_creators:
- - machine-generated
- - crowdsourced
- language: []
- license:
- - mit
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- - extended|squad
- - extended|race
- - extended|newsqa
- - extended|qamr
- - extended|movieQA
- task_categories:
- - text2text-generation
- task_ids:
- - text-simplification
- pretty_name: QA2D
- ---
-
- # Dataset Card for QA2D
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-instances)
-   - [Data Splits](#data-instances)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** https://worksheets.codalab.org/worksheets/0xd4ebc52cebb84130a07cbfe81597aaf0/
- - **Repository:** https://github.com/kelvinguu/qanli
- - **Paper:** https://arxiv.org/abs/1809.02922
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [Needs More Information]
-
- ### Dataset Summary
-
- Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.
-
- This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
-
- ### Supported Tasks and Leaderboards
-
- [Needs More Information]
-
- ### Languages
-
- en
-
- ## Dataset Structure
-
- ### Data Instances
-
- See below.
-
- ### Data Fields
-
- - `dataset`: lowercased name of dataset (movieqa, newsqa, qamr, race, squad)
- - `example_uid`: unique id of example within dataset (there are examples with the same uids from different datasets, so the combination of dataset + example_uid should be used for unique indexing)
- - `question`: tokenized (space-separated) question from the source QA dataset
- - `answer`: tokenized (space-separated) answer span from the source QA dataset
- - `turker_answer`: tokenized (space-separated) answer sentence collected from MTurk
- - `rule-based`: tokenized (space-separated) answer sentence, generated by the rule-based model
-
- ### Data Splits
- | Dataset Split | Number of Instances in Split |
- | ------------- |----------------------------- |
- | Train         | 60,710                       |
- | Dev           | 10,344                       |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [Needs More Information]
-
- #### Who are the source language producers?
-
- [Needs More Information]
-
- ### Annotations
-
- #### Annotation process
-
- [Needs More Information]
-
- #### Who are the annotators?
-
- [Needs More Information]
-
- ### Personal and Sensitive Information
-
- [Needs More Information]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- [Needs More Information]
-
- ### Citation Information
-
- @article{DBLP:journals/corr/abs-1809-02922,
-   author    = {Dorottya Demszky and
-                Kelvin Guu and
-                Percy Liang},
-   title     = {Transforming Question Answering Datasets Into Natural Language Inference
-                Datasets},
-   journal   = {CoRR},
-   volume    = {abs/1809.02922},
-   year      = {2018},
-   url       = {http://arxiv.org/abs/1809.02922},
-   eprinttype = {arXiv},
-   eprint    = {1809.02922},
-   timestamp = {Fri, 05 Oct 2018 11:34:52 +0200},
-   biburl    = {https://dblp.org/rec/journals/corr/abs-1809-02922.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
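The deleted dataset card notes that `example_uid` is unique only within a source dataset, so rows must be keyed by the combination of `dataset` and `example_uid`. A minimal sketch of that composite-key indexing, using hypothetical rows shaped like the documented fields:

```python
# Hypothetical rows matching the fields documented in the deleted README.
# `example_uid` repeats across source datasets, so the unique key is the
# (dataset, example_uid) pair, as the card advises.
rows = [
    {"dataset": "squad", "example_uid": "0001", "question": "what is x ?",
     "answer": "y", "turker_answer": "x is y .", "rule-based": "x is y ."},
    {"dataset": "newsqa", "example_uid": "0001", "question": "who said z ?",
     "answer": "w", "turker_answer": "w said z .", "rule-based": "w said z ."},
]

index = {(r["dataset"], r["example_uid"]): r for r in rows}

# `example_uid` alone collides across datasets; the composite key does not.
assert len({r["example_uid"] for r in rows}) == 1
assert len(index) == len(rows)
```

The same pattern applies with a dataframe library (e.g. a multi-column index); the point is only that neither field is a key on its own.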
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"domenicrosati--QA2D": {"description": "", "citation": "", "homepage": "", "license": "", "features": {"dataset": {"dtype": "string", "id": null, "_type": "Value"}, "example_uid": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "turker_answer": {"dtype": "string", "id": null, "_type": "Value"}, "rule-based": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "csv", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 16624765, "num_examples": 60710, "dataset_name": "QA2D"}, "dev": {"name": "dev", "num_bytes": 2836900, "num_examples": 10344, "dataset_name": "QA2D"}}, "download_checksums": null, "download_size": 13131892, "post_processing_size": null, "dataset_size": 19461665, "size_in_bytes": 32593557}}
 
 
data/dev-00000-of-00001.parquet → domenicrosati--QA2D/parquet-dev.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fd8a2843cd08410b96b49692aab1b817c6444aa46a0dee1a5e0d0511d923da85
- size 1870199
+ oid sha256:46db2b7194b9e2335caa01e626e2616b9eca061792680c5acee95002b3020ba4
+ size 1919109
data/train-00000-of-00001.parquet → domenicrosati--QA2D/parquet-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3fce9846e4ad2a409b548b39e5e333be34211452b066cd6bdaf7fd942e06fcad
- size 11261693
+ oid sha256:6c289b9e25325d6d721f18fe0d2eb8eaa3453f92fc063f3490b2102be3bcdda0
+ size 11467638
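The parquet payloads in this commit are stored as Git LFS pointer files: three key/value lines recording the spec version, a `sha256` oid, and the byte size of the real object. A small sketch of parsing that pointer format, fed the new train pointer text from the diff above:

```python
# Parse a Git LFS pointer (the three key/value lines shown in the diff)
# into a dict. The pointer text is copied verbatim from the renamed
# domenicrosati--QA2D/parquet-train.parquet entry in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6c289b9e25325d6d721f18fe0d2eb8eaa3453f92fc063f3490b2102be3bcdda0
size 11467638
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "<key> <value>"; split on the first space only,
    # since the version value itself contains no spaces but oids might
    # in future pointer extensions.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields

meta = parse_lfs_pointer(pointer)
assert meta["size"] == 11467638          # byte size of the real parquet file
assert meta["oid"].startswith("sha256:")  # oid is a prefixed sha256 digest
```

Comparing the parsed `size` against the downloaded file's length is a cheap sanity check that an LFS object was actually fetched rather than left as a pointer.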