Commit 5e530a2 (parent: 11097bc) by SuryaKrishna02: Update README.md

Files changed (1): README.md (+370 −54)
---
dataset_info:
- config_name: default
  features:
  - name: inputs
    dtype: string
  - name: targets
    dtype: string
  - name: language
    dtype: string
  - name: language_code
    dtype: string
  - name: annotation_type
    dtype: string
  - name: user_id
    dtype: string
  splits:
  - name: train
    num_bytes: 254697574
    num_examples: 202364
  - name: test
    num_bytes: 1782208
    num_examples: 1750
  download_size: 138163772
  dataset_size: 256479782
- config_name: demographics
  features:
  - name: user_id
    dtype: string
  - name: age_range
    sequence: int64
  - name: gender
    dtype: string
  - name: country
    dtype: string
  - name: languages
    sequence: string
  - name: dialects
    sequence: string
  splits:
  - name: train
    num_bytes: 202127
    num_examples: 1456
  download_size: 113702
  dataset_size: 202127
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
- config_name: demographics
  data_files:
  - split: train
    path: demographics/train-*
annotations_creators:
- crowdsourced
- expert-generated
language:
- amh
- arb
- ary
- ars
- acq
- arz
- apc
- ben
- ceb
- dan
- deu
- ell
- eng
- eus
- fil
- fin
- fra
- gle
- guj
- hat
- hau
- hin
- hun
- ibo
- ind
- ita
- jav
- jpn
- kan
- kir
- kor
- kur
- lit
- mal
- mar
- mlg
- msa
- mya
- nep
- nld
- nso
- nya
- pan
- pes
- pol
- por
- pus
- rus
- sin
- sna
- snd
- som
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tel
- tha
- tur
- ukr
- urd
- vie
- wol
- xho
- yor
- zho
- zul
language_creators:
- crowdsourced
- expert-generated
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: Aya Dataset
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- other
task_ids: []
---
![Aya Header](https://huggingface.co/datasets/CohereForAI/aya_dataset/blob/main/aya_header.png)

# Dataset Summary
The `Aya Dataset` is a multilingual instruction fine-tuning dataset curated by an open-science community via the [Aya Annotation Platform](https://aya.for.ai/) from Cohere For AI. The dataset contains a total of 204k human-annotated prompt-completion pairs along with demographic data about the annotators.

- **Curated by:** Contributors of the [Aya Open Science Initiative](https://aya.for.ai/).
- **Language(s):** 65 languages (71 including dialects & scripts).
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Supported Tasks:**
  - Training/Fine-tuning LLMs
  - Synthetic Data Generation
  - Data Augmentation
- **Aya Datasets Family:**

| Name | Explanation |
|------|-------------|
| [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction fine-tuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages, providing 513k instances for various tasks. |
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages. |

# Dataset
The `Aya Dataset` comprises two types of data:
1. **Human Annotations:** Original annotations (writing new examples from scratch) and a subset of re-annotations (editing existing examples to improve their quality and comprehensiveness), included only when there is a discernible difference between the original and edited versions. The selection is based on character-level Levenshtein distance, with an acceptance threshold of d ≥ 5 to avoid duplicates. Languages with at least 50 contributions are included in the dataset, striking a balance between data quality and practical human-resource limitations; the aim is to maximize language inclusion without compromising dataset quality.
2. **Demographics Data:** Data collected while onboarding a new user onto the [Aya Annotation Platform](https://aya.for.ai/). It includes demographic information such as country, gender, age range, and the languages (along with their dialects) that the annotator is comfortable annotating data in.

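The re-annotation filter described above can be sketched as follows. This is an illustrative reimplementation (the exact filtering code is not published here), using a standard dynamic-programming character-level Levenshtein distance and the stated d ≥ 5 acceptance threshold:

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming (two rows)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]


def keep_reannotation(original: str, edited: str, threshold: int = 5) -> bool:
    """Keep an edited example only if it differs enough from the original."""
    return levenshtein(original, edited) >= threshold


# A trivial punctuation tweak (d = 1) is dropped; a substantive rewrite is kept.
keep_reannotation("The cat sat.", "The cat sat!")  # False
```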
## Load with Datasets
To load this dataset, consisting of both prompt-completion pairs and demographics data, install the `datasets` library (`pip install datasets --upgrade`) and use the following code:

```python
from datasets import load_dataset

# Load the annotations dataset (default config)
aya_dataset = load_dataset("CohereForAI/aya_dataset")

# Load the demographics dataset
aya_demographics = load_dataset("CohereForAI/aya_dataset", "demographics")
```

## Data Fields
### Human Annotations (Default)
The data fields are the same among all splits:
- `inputs`: Prompt or input to the language model.
- `targets`: Completion or output of the language model.
- `language`: The language of the `inputs` and `targets`.
- `language_code`: The ISO code for the language of the `inputs` and `targets`.
- `annotation_type`: Denotes whether the pair is an 'original-annotations' or 're-annotations' instance.
- `user_id`: Unique identifier of the annotator who submitted the prompt-completion pair.

### Demographics Data
The data fields are the same among all splits:
- `user_id`: Unique identifier of the annotator who submitted the prompt-completion pair.
- `age_range`: Age range of the annotator, from 0 to 121.
- `gender`: Gender of the annotator. The values are 'male', 'female', 'prefer not to say', 'non-binary' and 'others'.
- `country`: Country of the annotator.
- `languages`: List of languages spoken by the annotator.
- `dialects`: Dialects reported by the annotator.

## Data Splits
### Human Annotations (Default)
The following are the splits of the data:

| Split | No. of instances |
|-------|------------------|
| train | 202,364 |
| test | 1,750 |

### Demographics Data
The following are the splits of the data:

| Split | No. of instances |
|-------|------------------|
| train | 1,456 |

## Data Instances
### Human Annotations (Default)
An example of `train` looks as follows:
```json
{
  "inputs": "What cultural events or festivals add vibrancy to Colombo's calendar...",
  "targets": "Colombo's cultural calendar is adorned with diverse events and festivals that celebrate the city's rich tapestry of traditions...",
  "language": "English",
  "language_code": "eng",
  "annotation_type": "original-annotations",
  "user_id": "f0ff69570af705b75c5a0851883e..."
}
```

### Demographics Data
An example of `train` looks as follows:
```json
{
  "user_id": "f0ff69570af705b75c5a0851883e...",
  "age_range": [25, 35],
  "gender": "female",
  "languages": ["English", "Hausa"],
  "dialects": ["Hausa"]
}
```

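Since both configs share the `user_id` field, annotation rows can be enriched with their annotator's demographics via a simple lookup. A minimal sketch in plain Python, using illustrative records shaped like the instances above (the values are truncated samples, not real rows):

```python
# Illustrative records shaped like the dataset's two configs
annotations = [
    {"inputs": "What cultural events ...", "targets": "Colombo's cultural calendar ...",
     "language": "English", "language_code": "eng",
     "annotation_type": "original-annotations", "user_id": "f0ff6957..."},
]
demographics = [
    {"user_id": "f0ff6957...", "age_range": [25, 35], "gender": "female",
     "languages": ["English", "Hausa"], "dialects": ["Hausa"]},
]

# Index demographics by user_id, then attach the matching record to each annotation
by_user = {row["user_id"]: row for row in demographics}
enriched = [{**ann, "annotator": by_user.get(ann["user_id"])} for ann in annotations]
```

The same join can be done at scale with `pandas.merge` after converting each split to a DataFrame.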
## Statistics
### Annotation Types
The following is the breakdown of original annotations and re-annotations in the final dataset:

| Type of Annotation | Instances |
|--------------------|-----------|
| Original Annotations | 138,844 |
| Re-Annotations | 65,270 |
| Total | 204,114 |

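As a quick sanity check, the two annotation types sum to the stated total, with re-annotations making up roughly a third of the dataset:

```python
original, reannotations = 138_844, 65_270
total = original + reannotations
assert total == 204_114  # matches the table above

reann_share = reannotations / total
print(f"Re-annotations: {reann_share:.1%}")  # roughly 32% of all instances
```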
### Languages
The dataset covers 65 languages: 28 high-resource, 12 mid-resource, and 31 low-resource languages. Details about the languages, dialects & scripts included in the dataset follow.

<details>
<summary> Languages Info </summary>

| ISO Code | Language | Resources |
|----------|----------|-----------|
| `amh` | Amharic | Low |
| `arb`, `ary`, `ars`, `acq`, `arz` & `apc` | Arabic (Standard, Moroccan, Najdi, Ta'izzi-Adeni, Egyptian & South Levantine) | High |
| `ben` | Bengali | Mid |
| `ceb` | Cebuano | Mid |
| `dan` | Danish | Mid |
| `deu` | German | High |
| `ell` | Greek | Mid |
| `eng` | English | High |
| `eus` | Basque | High |
| `fil` | Filipino | Mid |
| `fin` | Finnish | Mid |
| `fra` | French | High |
| `gle` | Irish | Low |
| `guj` | Gujarati | Low |
| `hat` | Haitian Creole | Low |
| `hau` | Hausa | Low |
| `hin` | Hindi | High |
| `hun` | Hungarian | High |
| `ibo` | Igbo | Low |
| `ind` | Indonesian | Mid |
| `ita` | Italian | High |
| `jav` | Javanese | Low |
| `jpn` | Japanese | High |
| `kan` | Kannada | Low |
| `kir` | Kyrgyz | Low |
| `kor` | Korean | Mid |
| `kur` | Kurdish | Low |
| `lit` | Lithuanian | Mid |
| `mal` | Malayalam | Low |
| `mar` | Marathi | Low |
| `mlg` | Malagasy | Low |
| `msa` | Malay | Mid |
| `mya` | Burmese | Low |
| `nep` | Nepali | Low |
| `nld` | Dutch | High |
| `nso` | Northern Sotho | Low |
| `nya` | Chichewa | Low |
| `pan` | Punjabi | Low |
| `pes` | Persian | High |
| `pol` | Polish | High |
| `por` | Portuguese | High |
| `pus` | Pashto | Low |
| `rus` | Russian | High |
| `sin` | Sinhala | Low |
| `sna` | Shona | Low |
| `snd` | Sindhi | Low |
| `som` | Somali | Low |
| `spa` | Spanish | High |
| `sqi` | Albanian | Low |
| `srp` | Serbian | High |
| `sun` | Sundanese | Low |
| `swa` | Swahili | Low |
| `swe` | Swedish | High |
| `tam` | Tamil | Mid |
| `tel` | Telugu | Low |
| `tha` | Thai | Mid |
| `tur` | Turkish | High |
| `ukr` | Ukrainian | Mid |
| `urd` | Urdu | Mid |
| `vie` | Vietnamese | High |
| `wol` | Wolof | Low |
| `xho` | Xhosa | Low |
| `yor` | Yorùbá | Low |
| `zho` | Chinese (Traditional & Simplified) | High |
| `zul` | Zulu | Low |
</details>
<br>

# Motivations & Intentions
- **Curation Rationale:** The curation effort employed an open-science approach, creating a diverse instruction-style dataset through annotators across the globe and ensuring comprehensive representation across all languages. The success of the effort, led by volunteers from diverse backgrounds, was significantly driven by their hope to meaningfully bring NLP advancements to their languages.

# Known Limitations
- **Language and dialect coverage:** The dataset covers a limited fraction of the world's linguistic diversity, with 93% of languages not represented. It faces challenges in distinguishing between languages and dialects, lacks coverage for many regional dialects, and excludes programming languages.
- **Uneven distribution of contributions:** Contributions are unevenly distributed, with a 'long tail' of annotators making only one or two contributions, leading to potential imbalances across languages and a lack of diversity within certain language annotations.
- **Cultural and Personal Bias:** Certain languages have limited representation due to a few dominant annotators, potentially leading to a narrow viewpoint and a skewed distribution of content, particularly towards certain domains like news.
- **Gendered Pronouns:** Many of the languages in the Aya Dataset only contain pronouns that are explicitly gendered (e.g., Arabic) or lack gender-neutral third-person pronouns for gender-neutral reference (e.g., Estonian).
- **Formality Distinctions:** The dataset encompasses languages with diverse formality distinctions, involving honorifics and situational choices in pronoun use, reflecting varying levels of standardization influenced by regional, cultural, and identity factors.
- **Toxic or Offensive Speech:** The Aya Annotation Platform lacked specific flags for toxic speech, relying on human verification and peer review to mitigate offensive content; there is no guarantee that all potentially offensive data points were removed during the annotation process.
- **Accounting for mislabeled data:** The Aya Annotation Platform lacks re-labeling capabilities, so the Aya Dataset may contain mislabeled data, including instances of incorrect language assignments and non-compliance with instruction-style formatting.

# Additional Information
## Provenance
- **Methods Used:** Crowdsourced through volunteer annotations, followed by a quality-assessment phase in which samples from the dataset were checked.
- **Methodology Details:**
  - *Source:* Original annotations and edits of open-source NLP datasets
  - *Platform:* [Aya Annotation Platform](https://aya.for.ai/)
  - *Dates of Collection:* Jun 2023 - Dec 2023

## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
  - *Current version:* 1.0
  - *Last Update:* 12/2023
  - *First Release:* 02/2024
- **Maintenance Plan:** Updates will be made available periodically based on volunteer contributions.

## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://aya.for.ai/

## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.

## Citation Information
```bibtex
[TODO]
```