Oleg Somov committed on
Commit
9012b42
1 Parent(s): b687218

change paths for data and update readme

Files changed (2):
  1. README.md +25 -72
  2. pauq.py +8 -8
README.md CHANGED
@@ -1,56 +1,11 @@
  ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- - machine-generated
- language:
- - en
- - ru
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- - multilingua;
- size_categories:
- - 1K<n<10K
- source_datasets:
- - spider
- task_categories:
- - text2text-generation
- task_ids: []
- pretty_name: Pauq
- tags:
- - text-to-sql
- dataset_info:
- features:
- - name: db_id
- dtype: string
- - name: query
- dtype: string
- - name: question
- dtype: string
- - name: query_toks
- sequence: string
- - name: query_toks_no_value
- sequence: string
- - name: question_toks
- sequence: string
- config_name: pauq
- splits:
- - name: train
- num_bytes:
- num_examples:
- - name: validation
- num_bytes:
- num_examples:
- download_size:
- dataset_size:
  ---

- # Dataset Card for Spider

  ## Table of Contents
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
@@ -76,42 +31,33 @@ dataset_info:

  ## Dataset Description

- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Point of Contact:**

  ### Dataset Summary

  ### Supported Tasks and Leaderboards

  ### Languages

  ## Dataset Structure

  ### Data Instances

- **What do the instances that comprise the dataset represent?**
-
- Each instance is natural language question and the equivalent SQL query
-
- **How many instances are there in total?**
-
- **What data does each instance consist of?**
-
  [More Information Needed]

  ### Data Fields

- * **db_id**: Database name
- * **question**: Natural language to interpret into SQL
- * **query**: Target SQL query
- * **query_toks**: List of tokens for the query
- * **query_toks_no_value**: List of tokens for the query
- * **question_toks**: List of tokens for the question

  ### Data Splits

@@ -127,6 +73,8 @@ Each instance is natural language question and the equivalent SQL query

  #### Initial Data Collection and Normalization

  #### Who are the source language producers?

  [More Information Needed]
@@ -135,8 +83,12 @@ Each instance is natural language question and the equivalent SQL query

  #### Annotation process

  #### Who are the annotators?

  ### Personal and Sensitive Information

  [More Information Needed]
@@ -145,15 +97,17 @@ Each instance is natural language question and the equivalent SQL query

  ### Social Impact of Dataset

  ### Discussion of Biases

  [More Information Needed]

  ### Other Known Limitations

- ## Additional Information

- The listed authors in the homepage are maintaining/supporting the dataset.

  ### Dataset Curators

@@ -161,13 +115,12 @@ The listed authors in the homepage are maintaining/supporting the dataset.

  ### Licensing Information

- The spider dataset is licensed under
- the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
-
  [More Information Needed]

  ### Citation Information

  ### Contributions
- >>>>>>> master

  ---
+ TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
  ---

+ # Dataset Card for [Dataset Name]

  ## Table of Contents
+ - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)

  ## Dataset Description

+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**

  ### Dataset Summary

+ [More Information Needed]

  ### Supported Tasks and Leaderboards

+ [More Information Needed]

  ### Languages

+ [More Information Needed]

  ## Dataset Structure

  ### Data Instances

  [More Information Needed]

  ### Data Fields

+ [More Information Needed]

  ### Data Splits

  #### Initial Data Collection and Normalization

+ [More Information Needed]
+
  #### Who are the source language producers?

  [More Information Needed]

  #### Annotation process

+ [More Information Needed]
+
  #### Who are the annotators?

+ [More Information Needed]
+
  ### Personal and Sensitive Information

  [More Information Needed]

  ### Social Impact of Dataset

+ [More Information Needed]
+
  ### Discussion of Biases

  [More Information Needed]

  ### Other Known Limitations

+ [More Information Needed]

+ ## Additional Information

  ### Dataset Curators

  ### Licensing Information

  [More Information Needed]

  ### Citation Information

+ [More Information Needed]

  ### Contributions
+
+ Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
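
Note on the record schema: although the new README header is still a placeholder, the removed dataset_info block above already documents the fields each instance carries (db_id, query, question, query_toks, query_toks_no_value, question_toks). A rough sketch of one such record is shown below; the values are purely illustrative and are not taken from the dataset.

# Illustrative only: the shape of a single PAUQ-style record, following the
# features declared in the removed dataset_info block. Values are made up.
example = {
    "db_id": "concert_singer",                    # database the query runs against
    "question": "How many singers do we have?",   # natural-language question
    "query": "SELECT count(*) FROM singer",       # target SQL query
    "query_toks": ["SELECT", "count", "(", "*", ")", "FROM", "singer"],
    "query_toks_no_value": ["select", "count", "(", "*", ")", "from", "singer"],
    "question_toks": ["How", "many", "singers", "do", "we", "have", "?"],
}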
pauq.py CHANGED
@@ -144,49 +144,49 @@ class Pauq(datasets.GeneratorBasedBuilder):
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/ru_iid_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/ru_iid_test.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/ru_tl_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/ru_tl_test.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/en_iid_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/en_iid_test.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/en_tl_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
- "data_filepath": os.path.join(downloaded_filepath, "splits/en_tl_test.json"),
  },
  ),
  ]

  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/ru_iid_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/ru_iid_test.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/ru_tl_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/ru_tl_test.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/en_iid_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/en_iid_test.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/en_tl_train.json"),
  },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
  gen_kwargs={
+ "data_filepath": os.path.join(downloaded_filepath, "formatted_pauq/splits/en_tl_test.json"),
  },
  ),
  ]
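
The change above only re-points each SplitGenerator at the new formatted_pauq/splits/ directory inside the downloaded archive; each generator still receives a single data_filepath via gen_kwargs. For orientation, here is a minimal sketch of how such a filepath is typically consumed in a datasets.GeneratorBasedBuilder. The actual _generate_examples in pauq.py is not shown in this hunk, so the field handling below is an assumption based on the features listed in the dataset card, not the committed implementation.

# Sketch only: assumes each split file is a JSON list of records carrying the
# fields declared in the dataset card (db_id, query, question, ...).
import json

def _generate_examples(self, data_filepath):
    """Yield (key, example) pairs from one split file passed in via gen_kwargs."""
    with open(data_filepath, encoding="utf-8") as f:
        records = json.load(f)
    for idx, rec in enumerate(records):
        yield idx, {
            "db_id": rec["db_id"],
            "query": rec["query"],
            "question": rec["question"],
            "query_toks": rec["query_toks"],
            "query_toks_no_value": rec["query_toks_no_value"],
            "question_toks": rec["question_toks"],
        }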