versae committed on
Commit 0187a27
Parent: b885ac4

Adding boundaries to gaussian

Files changed (2):
  1. README.md +44 -300
  2. mc4-sampling.py +1 -1
README.md CHANGED
@@ -135,33 +135,21 @@ task_ids:
 paperswithcode_id: mc4
 ---
 
-# Dataset Card for mC4
+# Dataset Card for mC4-sampling
 
 ## Table of Contents
 
-- [Dataset Card for mC4](#dataset-card-for-mc4)
+- [Dataset Card for mC4-sampling](#dataset-card-for-mc4-sampling)
   - [Table of Contents](#table-of-contents)
   - [Dataset Description](#dataset-description)
     - [Dataset Summary](#dataset-summary)
+    - [Dataset Sampling](#dataset-sampling)
     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
     - [Languages](#languages)
   - [Dataset Structure](#dataset-structure)
     - [Data Instances](#data-instances)
     - [Data Fields](#data-fields)
     - [Data Splits](#data-splits)
-  - [Dataset Creation](#dataset-creation)
-    - [Curation Rationale](#curation-rationale)
-    - [Source Data](#source-data)
-      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-      - [Who are the source language producers?](#who-are-the-source-language-producers)
-    - [Annotations](#annotations)
-      - [Annotation process](#annotation-process)
-      - [Who are the annotators?](#who-are-the-annotators)
-    - [Personal and Sensitive Information](#personal-and-sensitive-information)
-  - [Considerations for Using the Data](#considerations-for-using-the-data)
-    - [Social Impact of Dataset](#social-impact-of-dataset)
-    - [Discussion of Biases](#discussion-of-biases)
-    - [Other Known Limitations](#other-known-limitations)
   - [Additional Information](#additional-information)
     - [Dataset Curators](#dataset-curators)
     - [Licensing Information](#licensing-information)
@@ -170,131 +158,15 @@ paperswithcode_id: mc4
 
 ## Dataset Description
 
-- **Homepage:** https://huggingface.co/datasets/allenai/c4
-- **Paper:** https://arxiv.org/abs/1910.10683
+- **Homepage:** https://huggingface.co/bertin-project/bertin-roberta-base-spanish
 
 ### Dataset Summary
 
-This dataset builds upon the original mC4 and adds perplexity sampling methods to perform perplexity-based filtering on the fly. Please, refer to [BERTIN Project](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
-
-The original dataset is the multilingual colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
-
-This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
-
-108 languages are available and are reported in the table below.
-
-Note that the languages that end with "-Latn" are simply romanized variants, i.e. written using the Latin script.
-
-| language code | language name |
-|:----------------|:---------------------|
-| af | Afrikaans |
-| am | Amharic |
-| ar | Arabic |
-| az | Azerbaijani |
-| be | Belarusian |
-| bg | Bulgarian |
-| bg-Latn | Bulgarian (Latin) |
-| bn | Bangla |
-| ca | Catalan |
-| ceb | Cebuano |
-| co | Corsican |
-| cs | Czech |
-| cy | Welsh |
-| da | Danish |
-| de | German |
-| el | Greek |
-| el-Latn | Greek (Latin) |
-| en | English |
-| eo | Esperanto |
-| es | Spanish |
-| et | Estonian |
-| eu | Basque |
-| fa | Persian |
-| fi | Finnish |
-| fil | Filipino |
-| fr | French |
-| fy | Western Frisian |
-| ga | Irish |
-| gd | Scottish Gaelic |
-| gl | Galician |
-| gu | Gujarati |
-| ha | Hausa |
-| haw | Hawaiian |
-| hi | Hindi |
-| hi-Latn | Hindi (Latin script) |
-| hmn | Hmong, Mong |
-| ht | Haitian |
-| hu | Hungarian |
-| hy | Armenian |
-| id | Indonesian |
-| ig | Igbo |
-| is | Icelandic |
-| it | Italian |
-| iw | former Hebrew |
-| ja | Japanese |
-| ja-Latn | Japanese (Latin) |
-| jv | Javanese |
-| ka | Georgian |
-| kk | Kazakh |
-| km | Khmer |
-| kn | Kannada |
-| ko | Korean |
-| ku | Kurdish |
-| ky | Kyrgyz |
-| la | Latin |
-| lb | Luxembourgish |
-| lo | Lao |
-| lt | Lithuanian |
-| lv | Latvian |
-| mg | Malagasy |
-| mi | Maori |
-| mk | Macedonian |
-| ml | Malayalam |
-| mn | Mongolian |
-| mr | Marathi |
-| ms | Malay |
-| mt | Maltese |
-| my | Burmese |
-| ne | Nepali |
-| nl | Dutch |
-| no | Norwegian |
-| ny | Nyanja |
-| pa | Punjabi |
-| pl | Polish |
-| ps | Pashto |
-| pt | Portuguese |
-| ro | Romanian |
-| ru | Russian |
-| ru-Latn | Russian (Latin) |
-| sd | Sindhi |
-| si | Sinhala |
-| sk | Slovak |
-| sl | Slovenian |
-| sm | San Marino |
-| sn | Shona |
-| so | Somali |
-| sq | Albanian |
-| sr | Serbian |
-| st | Southern Sotho |
-| su | Sundanese |
-| sv | Swedish |
-| sw | Swahili |
-| ta | Tamil |
-| te | Telugu |
-| tg | Tajik |
-| th | Thai |
-| tr | Turkish |
-| uk | Ukrainian |
-| und | Unknown language |
-| ur | Urdu |
-| uz | Uzbek |
-| vi | Vietnamese |
-| xh | Xhosa |
-| yi | Yiddish |
-| yo | Yoruba |
-| zh | Chinese |
-| zh-Latn | Chinese (Latin) |
-| zu | Zulu |
+This dataset builds upon the AllenAI version of the original [mC4](https://huggingface.co/datasets/allenai/c4) and adds sampling methods to perform perplexity-based filtering on the fly. Please refer to the [BERTIN Project](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for details.
+
+The original dataset is mC4, the multilingual, colossal, cleaned version of Common Crawl's web crawl corpus (https://commoncrawl.org).
+
+108 languages are available; they are listed in the [`mc4` dataset card](https://huggingface.co/datasets/mc4#dataset-summary).
 
 You can load the mC4 subset of any language like this:
 
@@ -312,9 +184,40 @@ from datasets import load_dataset
 mc4_subset_with_five_languages = load_dataset("mc4", languages=["en", "fr", "es", "de", "zh"])
 ```
 
+### Dataset Sampling
+
+There are three main ways of getting sampled versions of mC4 using this dataset.
+
+#### Random
+
+Arguably the simplest of the methods: it keeps a document based on a probability threshold we call `factor`, which defaults to `0.5` for random sampling:
+
+```python
+def _should_keep_doc_random(self, doc, factor=None, **kwargs):
+    factor = 0.5 if factor is None else factor
+    return self.rng.uniform() <= factor
+```
+
+To use this sampling method, add the extra parameters when instantiating the dataset:
+
+```python
+from datasets import load_dataset
+
+mc4random = load_dataset(
+    "bertin-project/mc4-sampling", "es",
+    split="train",
+    streaming=True,
+    sampling_method="random",
+    factor=0.5,
+)
+for sample in mc4random:
+    print(sample)
+    break
+```
+
 ### Supported Tasks and Leaderboards
 
-mC4 is mainly intended to pretrain language models and word representations.
+mC4-sampling is mainly intended to pretrain language models and word representations on a budget.
 
 ### Languages
 
@@ -357,176 +260,17 @@ The data have several fields:
 
 ### Data Splits
 
-To build mC4, the authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. The resulting mC4 subsets for each language are reported in this table:
-
-| config | train | validation |
-|:---------|:--------|:-------------|
-| af | ? | ? |
-| am | ? | ? |
-| ar | ? | ? |
-| az | ? | ? |
-| be | ? | ? |
-| bg | ? | ? |
-| bg-Latn | ? | ? |
-| bn | ? | ? |
-| ca | ? | ? |
-| ceb | ? | ? |
-| co | ? | ? |
-| cs | ? | ? |
-| cy | ? | ? |
-| da | ? | ? |
-| de | ? | ? |
-| el | ? | ? |
-| el-Latn | ? | ? |
-| en | ? | ? |
-| eo | ? | ? |
-| es | ? | ? |
-| et | ? | ? |
-| eu | ? | ? |
-| fa | ? | ? |
-| fi | ? | ? |
-| fil | ? | ? |
-| fr | ? | ? |
-| fy | ? | ? |
-| ga | ? | ? |
-| gd | ? | ? |
-| gl | ? | ? |
-| gu | ? | ? |
-| ha | ? | ? |
-| haw | ? | ? |
-| hi | ? | ? |
-| hi-Latn | ? | ? |
-| hmn | ? | ? |
-| ht | ? | ? |
-| hu | ? | ? |
-| hy | ? | ? |
-| id | ? | ? |
-| ig | ? | ? |
-| is | ? | ? |
-| it | ? | ? |
-| iw | ? | ? |
-| ja | ? | ? |
-| ja-Latn | ? | ? |
-| jv | ? | ? |
-| ka | ? | ? |
-| kk | ? | ? |
-| km | ? | ? |
-| kn | ? | ? |
-| ko | ? | ? |
-| ku | ? | ? |
-| ky | ? | ? |
-| la | ? | ? |
-| lb | ? | ? |
-| lo | ? | ? |
-| lt | ? | ? |
-| lv | ? | ? |
-| mg | ? | ? |
-| mi | ? | ? |
-| mk | ? | ? |
-| ml | ? | ? |
-| mn | ? | ? |
-| mr | ? | ? |
-| ms | ? | ? |
-| mt | ? | ? |
-| my | ? | ? |
-| ne | ? | ? |
-| nl | ? | ? |
-| no | ? | ? |
-| ny | ? | ? |
-| pa | ? | ? |
-| pl | ? | ? |
-| ps | ? | ? |
-| pt | ? | ? |
-| ro | ? | ? |
-| ru | ? | ? |
-| ru-Latn | ? | ? |
-| sd | ? | ? |
-| si | ? | ? |
-| sk | ? | ? |
-| sl | ? | ? |
-| sm | ? | ? |
-| sn | ? | ? |
-| so | ? | ? |
-| sq | ? | ? |
-| sr | ? | ? |
-| st | ? | ? |
-| su | ? | ? |
-| sv | ? | ? |
-| sw | ? | ? |
-| ta | ? | ? |
-| te | ? | ? |
-| tg | ? | ? |
-| th | ? | ? |
-| tr | ? | ? |
-| uk | ? | ? |
-| und | ? | ? |
-| ur | ? | ? |
-| uz | ? | ? |
-| vi | ? | ? |
-| xh | ? | ? |
-| yi | ? | ? |
-| yo | ? | ? |
-| zh | ? | ? |
-| zh-Latn | ? | ? |
-| zu | ? | ? |
-
-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed]
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
+The same splits as in [mC4](https://huggingface.co/datasets/mc4#data-splits) are available.
 
 ## Additional Information
 
-### Dataset Curators
-
-[More Information Needed]
-
 ### Licensing Information
 
-AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
+The BERTIN Project is releasing this dataset under the same terms under which AllenAI released mC4, that is, the ODC-BY license. By using it, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
 
 ### Citation Information
 
-```
+```bibtex
 @article{2019t5,
   author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
   title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
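Note: the new `Dataset Sampling` section documents only the `random` method so far, while this commit touches the Gaussian sampler. A minimal usage sketch for the Gaussian method, assuming `sampling_method="gaussian"` accepts `factor`, `width`, and the new `boundaries` keyword in the same style as the documented `random` method (the boundary values below are illustrative placeholders, not the script's defaults):

```python
from datasets import load_dataset

# Sketch under assumptions: "gaussian" follows the keyword pattern the card
# shows for "random"; the boundaries are made-up perplexity quartile cut-offs.
mc4gaussian = load_dataset(
    "bertin-project/mc4-sampling", "es",
    split="train",
    streaming=True,
    sampling_method="gaussian",
    factor=0.78,
    width=9 / 2,
    boundaries=[250.0, 500.0, 1000.0],  # hypothetical quartile boundaries
)
for sample in mc4gaussian:
    print(sample)
    break
```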
mc4-sampling.py CHANGED
@@ -340,7 +340,7 @@ class Mc4Sampling(datasets.GeneratorBasedBuilder):
         probability = factor / quartile_range
         return self.rng.uniform() < probability
 
-    def _should_keep_doc_gaussian(self, doc, factor=None, width=None, **kwargs):
+    def _should_keep_doc_gaussian(self, doc, factor=None, width=None, boundaries=None, **kwargs):
         perplexity = self.get_perplexity(doc)
         width = (9 / 2) if width is None else width  # width (spread) of the exponential curve
         factor = 0.78 if factor is None else factor
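The hunk shows only the changed signature; the surrounding context suggests the body weights documents by an exponential curve over their perplexity. A sketch of how the new `boundaries` argument might be consumed inside `Mc4Sampling`, assuming the middle boundary centres the curve (the default boundary values here are placeholders, not the script's):

```python
import numpy as np

# Sketch of the Gaussian keep-rule as a method on Mc4Sampling, assuming the
# middle quartile boundary centres the exponential decay.
def _should_keep_doc_gaussian(self, doc, factor=None, width=None, boundaries=None, **kwargs):
    perplexity = self.get_perplexity(doc)
    width = (9 / 2) if width is None else width  # width (spread) of the exponential curve
    factor = 0.78 if factor is None else factor  # height of the exponential curve
    if boundaries is None:
        boundaries = [250.0, 500.0, 1000.0]  # hypothetical perplexity quartiles
    middle = boundaries[1]
    # Keep probability decays as perplexity moves away from the middle quartile.
    weight = np.exp((-1 / width) * ((perplexity - middle) / middle) ** 2)
    return self.rng.uniform() < factor * weight
```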