dhladek committed on
Commit
7f627a9
1 Parent(s): c158430
Files changed (2)
  1. README.md +169 -0
  2. squad-sk.py +132 -0
README.md ADDED
@@ -0,0 +1,169 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language:
+ - sk
+ language_creators:
+ - crowdsourced
+ - found
+ license:
+ - cc-by-sa-4.0
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ paperswithcode_id: squad-sk
+ pretty_name: squad-sk
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ tags:
+ - wikipedia
+ task_categories:
+ - question-answering
+ - text-retrieval
+ task_ids:
+ - open-domain-qa
+ - extractive-qa
+ - document-retrieval
+ train-eval-index:
+ - col_mapping:
+     answers:
+       answer_start: answer_start
+       text: text
+     context: context
+     question: question
+   config: squad_v2
+   metrics:
+   - name: SQuAD v2
+     type: squad_v2
+   splits:
+     eval_split: validation
+     train_split: train
+   task: question-answering
+   task_id: extractive_question_answering
+ ---
+
+ # Dataset Card for squad-sk
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Slovak translation of the Stanford Question Answering Dataset (SQuAD).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Slovak (`sk`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
+
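For orientation while the card's Data Instances section is still empty, here is a minimal sketch of one record, matching the features declared in squad-sk.py below. The `id` and the Slovak text are illustrative values invented for this example (the question asks "Which city is the capital of Slovakia?"), not drawn from the actual data:

```json
{
  "id": "example-0001",
  "title": "Bratislava",
  "context": "Bratislava je hlavné mesto Slovenska.",
  "question": "Ktoré mesto je hlavným mestom Slovenska?",
  "answers": {
    "text": ["Bratislava"],
    "answer_start": [0]
  }
}
```

As in the original SQuAD format, `answers` holds parallel lists, so a question can carry several answer spans, or none.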
squad-sk.py ADDED
@@ -0,0 +1,132 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """SQUAD-SK: The Slovak Translation of the Stanford Question Answering Dataset."""
+
+
+ import json
+
+ import datasets
+ from datasets.tasks import QuestionAnsweringExtractive
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ TBD
+ """
+
+ _DESCRIPTION = """\
+ Slovak translation of the Stanford Question Answering Dataset
+ """
+
+ _URL = "https://files.kemt.fei.tuke.sk/corpora/sk-quad/squad-sk-230321.tar.gz"
+
+ _FILES = {
+     "dev": "squad-sk/dev-230321.json",
+     "train": "squad-sk/train-230321.json",
+ }
+
+ class SquadSkConfig(datasets.BuilderConfig):
+     """BuilderConfig for SQUAD-SK."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for SQUAD-SK.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(**kwargs)
+
+
+ class SquadSk(datasets.GeneratorBasedBuilder):
+     """SQUAD-SK: the Slovak translation of the Stanford Question Answering Dataset."""
+
+     BUILDER_CONFIGS = [
+         SquadSkConfig(
+             name="plain_text",
+             version=datasets.Version("1.1.1", ""),
+             description="Plain text",
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "context": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "answers": datasets.features.Sequence(
+                         {
+                             "text": datasets.Value("string"),
+                             "answer_start": datasets.Value("int32"),
+                         }
+                     ),
+                 }
+             ),
+             # No default supervised_keys (as we have to pass both question
+             # and context as input).
+             supervised_keys=None,
+             homepage="https://rajpurkar.github.io/SQuAD-explorer/",
+             citation=_CITATION,
+             task_templates=[
+                 QuestionAnsweringExtractive(
+                     question_column="question", context_column="context", answers_column="answers"
+                 )
+             ],
+         )
+
+     def _split_generators(self, dl_manager):
+         downloaded_dir = dl_manager.download_and_extract(_URL)
+         logger.info("downloaded and extracted to %s", downloaded_dir)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_dir + "/" + _FILES["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_dir + "/" + _FILES["dev"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """This function returns the examples in the raw (text) form."""
+         logger.info("generating examples from = %s", filepath)
+         key = 0
+         with open(filepath, encoding="utf-8") as f:
+             squad = json.load(f)
+             for article in squad["data"]:
+                 title = article.get("title", "")
+                 for paragraph in article["paragraphs"]:
+                     context = paragraph["context"]  # do not strip leading blank spaces GH-2585
+                     for qa in paragraph["qas"]:
+                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
+                         assert len(qa["question"]) > 0
+                         # if len(answer_starts) == 0:  # kept commented out: unanswerable questions are not skipped
+                         #     continue
+                         answers = [answer["text"] for answer in qa["answers"]]
+                         assert len(answer_starts) == len(answers)
+                         # Features currently used are "context", "question", and "answers".
+                         # Others are extracted here for the ease of future expansions.
+                         yield key, {
+                             "title": title,
+                             "context": context,
+                             "question": qa["question"],
+                             "id": qa["id"],
+                             "answers": {
+                                 "answer_start": answer_starts,
+                                 "text": answers,
+                             },
+                         }
+                         key += 1
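As a quick check of the loader above, here is a minimal usage sketch. It assumes the script is saved locally as squad-sk.py; the exact Hub repository id is not stated in this commit, so the local-script form is shown:

```python
# Minimal usage sketch for the loading script above.
# Assumes a `datasets` version that supports script-based loading
# (older releases, or newer ones with trust_remote_code=True).
from datasets import load_dataset

ds = load_dataset("squad-sk.py")   # local script; a Hub repo id would also work
print(ds)                          # DatasetDict with "train" and "validation" splits
print(ds["train"][0]["question"])  # first Slovak question
```

The first call downloads and extracts squad-sk-230321.tar.gz from files.kemt.fei.tuke.sk and caches the generated examples; later calls reuse the cache.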