[MADLAD-400 (*Multilingual Audited Dataset: Low-resource And Document-level*)](https://arxiv.org/abs/2309.04662) is
a document-level multilingual dataset based on Common Crawl, covering 419
languages in total. It uses all snapshots of CommonCrawl available as of August
1, 2022. The primary advantage of this dataset over similar datasets is that it
is more multilingual (419 languages), it is audited and more highly filtered,
and it is document-level. The main disadvantage is also its strength -- being
more filtered, it may lack the recall needed for some applications.

There are two versions released: the **noisy** dataset, which has no filtering
except document-level LangID, and the **clean** dataset, which has a variety of
filters applied.
## Filtering

Before separating the raw CommonCrawl corpus by LangID, the following
filtering steps are done, similar to Raffel et al (2020):

- Discarded any page with fewer than 5 sentences and only retained lines that
  contained at least 3 words.
- Removed any page where the phrase “lorem ipsum” appeared.
- Removed any pages containing the phrases "terms of use", "privacy policy",
  "cookie policy", "uses cookies", "use of cookies", "use cookies"
- Removed any pages that contained a curly bracket.
- To deduplicate the data set, discarded all but one of any three-sentence span
  occurring more than once in the data set.
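These page-level filters and the three-sentence-span deduplication can be sketched as follows. This is a minimal illustration, not the actual pipeline: the sentence splitter is a crude regex stand-in, and the deduplication is a greedy, non-overlapping approximation of the span-level rule.

```python
import re

BAD_PHRASES = ["terms of use", "privacy policy", "cookie policy",
               "uses cookies", "use of cookies", "use cookies",
               "lorem ipsum"]

def split_sentences(page):
    # Crude sentence splitter, for illustration only.
    return [s.strip() for s in re.split(r"[.!?]\s+", page) if s.strip()]

def keep_page(page):
    """Apply the page-level filters described above."""
    lower = page.lower()
    if any(p in lower for p in BAD_PHRASES):
        return False
    if "{" in page or "}" in page:  # pages containing a curly bracket
        return False
    # Only lines with at least 3 words count toward the page.
    lines = [l for l in page.splitlines() if len(l.split()) >= 3]
    # Discard pages with fewer than 5 sentences.
    return len(split_sentences(" ".join(lines))) >= 5

def dedup_three_sentence_spans(pages):
    """Discard all but the first occurrence of any three-sentence span
    (greedy, non-overlapping approximation)."""
    seen, out = set(), []
    for page in pages:
        sents = split_sentences(page)
        kept, i = [], 0
        while i < len(sents):
            span = tuple(sents[i:i + 3])
            if len(span) == 3 and span in seen:
                i += 3  # drop the duplicated span, keep the rest of the page
                continue
            if len(span) == 3:
                seen.add(span)
            kept.append(sents[i])
            i += 1
        out.append(". ".join(kept))
    return out
```

The real pipeline's sentence segmentation and span accounting are not specified in this card; a production version would need a proper segmenter.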

The `noisy` subset of the data was filtered only by document-level LangID, which
was taken to be the majority sentence-level LangID prediction. The `clean`
subset was additionally filtered as described below.
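The majority-vote document-level LangID described above can be sketched like this; `toy_langid` is a hypothetical stand-in for the real sentence-level classifier:

```python
from collections import Counter

def document_langid(sentences, sentence_langid):
    """Document-level LangID = the majority sentence-level prediction."""
    votes = Counter(sentence_langid(s) for s in sentences)
    return votes.most_common(1)[0][0]

# Toy sentence-level classifier for illustration only:
# "en" if the sentence is pure ASCII, otherwise "und".
toy_langid = lambda s: "en" if s.isascii() else "und"
```

For example, `document_langid(["Hello there.", "Good day.", "¿Qué tal?"], toy_langid)` returns `"en"`.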
### Cursed Substrings

Based on the initial round of data audits, the authors created a heuristic list of
substrings and regexes accounting for a large amount of questionable content.
Keep in mind that these all are fed into the `pct_questionable` score -- a
sentence is only excluded from the `clean` dataset if over 20% of the sentences
in the document are flagged.
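A minimal sketch of how such a score might be computed; the `CURSED_SUBSTRINGS` below are a tiny illustrative subset of the real list, and the 20% threshold is the one stated above:

```python
import re

# Tiny illustrative subset -- the real CURSED_SUBSTRINGS list is far longer.
CURSED_SUBSTRINGS = [" №", "���", r"\|\s*$", r" nr\.$"]
PATTERNS = [re.compile(p) for p in CURSED_SUBSTRINGS]

def pct_questionable(sentences):
    """Fraction of sentences matching any cursed substring/regex."""
    if not sentences:
        return 0.0
    flagged = sum(1 for s in sentences if any(p.search(s) for p in PATTERNS))
    return flagged / len(sentences)

def keep_in_clean(sentences, threshold=0.20):
    """A document stays in `clean` only if at most 20% of sentences are flagged."""
    return pct_questionable(sentences) <= threshold
```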
Many languages using Brahmic Abugida (South and Southeast Asian scripts like
Devanagari, Khmer, etc.) use some variant on the virama character. For whatever
reason, it was found that this character was often messed up in the Common Crawl
snapshots used. Therefore, for the languages `bn my pa gu or ta te kn ml
si th tl mn lo bo km hi mr ne gom as jv dv bho dz hne ks_Deva mag mni shn yue zh
ja kjg mnw ksw rki mtr mwr xnr`, a special correction step was done.

For these languages, the authors took the list of all virama characters and removed all
unnecessary spaces between each instance of a virama character and the next
character with a regex.
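The space-removal step can be sketched with a single regex. The two virama code points below (Devanagari U+094D, Khmer U+17D2) stand in for the full list, which is not reproduced here:

```python
import re

# Illustrative subset of virama characters -- the actual list covers
# all the scripts named above.
VIRAMAS = "\u094d\u17d2"
VIRAMA_SPACE = re.compile(f"([{VIRAMAS}]) +")

def fix_virama_spaces(text):
    """Drop spurious spaces between a virama and the following character."""
    return VIRAMA_SPACE.sub(r"\1", text)
```

For example, `fix_virama_spaces("क् या")` rejoins the split conjunct into `"क्या"`.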
### Myanmar Font Compatibility

Prior to 2019, the most popular font for Burmese websites was the Zawgyi font.
The authors used [Myanmar Tools](https://github.com/google/myanmar-tools) to convert text.

Several scripts, like the Chinese script, Tibetan script, and Thai, do not use
whitespace to separate characters. The languages with this property in this
dataset received special handling (see below.)
### Special filters

Chinese had a particular issue with pornographic content. After manual inspection,
a list of strings likely to be present in pornographic content was developed. All
pages containing at least one of these strings were removed. This resulted in a 17%
reduction in the number of documents and a 56% reduction in file size.

```
pornsignals = "caoporn caoprom caopron caoporen caoponrn caoponav caopom caoorn 99re dy888 caopro hezyo re99 4438x zooskool xfplay 7tav xxoo xoxo 52av freexx 91chinese anquye cao97 538porm 87fuli 91pron 91porn 26uuu 4438x 182tv kk4444 777me ae86 91av 720lu yy6080 6080yy qqchub paa97 aiai777 yy4480 videossexo 91free 一级特黄大片 偷拍久久国产视频 日本毛片免费视频观看 久久免费热在线精品 高清毛片在线看 日本毛片高清免费视频 一级黄色录像影片 亚洲男人天堂 久久精品视频在线看 自拍区偷拍亚洲视频 亚洲人成视频在线播放 色姑娘综合站 丁香五月啪啪 在线视频成人社区 亚洲人成视频在线播放 久久国产自偷拍 一本道 大香蕉无码 香港经典三级 亚洲成在人线免费视频 天天色综合网 大香蕉伊人久草 欧美一级高清片 天天鲁夜夜啪视频在线 免费黄片视频在线观看 加比勒久久综合 久草热久草在线视频 韩国三级片大全在线观看 青青草在线视频 美国一级毛片 久草在线福利资源 啪啪啪视频在线观看免费 成人福利视频在线观看 婷婷我去也 老司机在线国产 久久成人视频 手机看片福利永久国产 高清国产偷拍在线 大香蕉在线影院 日本高清免费一本视频 男人的天堂东京热 影音先锋男人资源 五月婷婷开心中文字幕 亚洲香蕉视频在线播放 天天啪久久爱视频精品 超碰久久人人摸人人搞".split()
```
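Applying such a list is then a simple substring check per page; this sketch assumes pages are plain strings:

```python
def drop_porn_pages(pages, signals):
    """Remove any page containing at least one signal string."""
    return [p for p in pages if not any(sig in p for sig in signals)]
```

For example, with `signals = ["91porn", "一本道"]` (a tiny subset of `pornsignals`), a page mentioning `91porn` is dropped and the rest are kept.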
## Language code notes

All these different datasets have slightly different language codes! We use the
BCP-47 standard, which specifies the 2-letter ISO-639-1 code when applicable,
and otherwise the ISO-639-3 code. Script tags and region tags are omitted when
they are defined as the default value by CLDR, and otherwise included; for
instance `ks` refers to Kashmiri in Nastaliq/Arabic script (CLDR default),
whereas `ks_Deva` refers to Kashmiri in Devanagari.

A few more random notes, comparing to common alternative codes for these
languages:

* `fil` for Filipino/Tagalog, not `tl`
* `ak` for Twi/Akan, rather than `tw`. This includes Fante.
* Unfortunately, the macro code `chm` is used for Meadow Mari (instead of the
  correct `mhr`), and `mrj` for Hill Mari
* `no` for Norwegian Bokmål, whereas some resources use `nb`
* `ps` for Pashto instead of `pbt` (Southern Pashto)
* `ms` for Standard Malay, not `zlm`
* `sq` for Albanian, and don't distinguish dialects like Gheg (`aln`) and
  Tosk (`als`)
* `ber` as the code for Tamazight, after consultation with Tamazight
  speakers opining that the dialect distinctions are not significant. Other
  resources use the individual codes like `tzm` and `kab`.
* Macrocode `qu` for Quechua. In practice, this seems usually to be
  a mix of the Ayacucho and Cusco dialects. Other resources, like NLLB, may
  use the dialect code, e.g. `quy` for Ayacucho Chanka. The same is true for a
  few other macro codes, like `ff` (macro code for Fulfulde, whereas other
  sources may use e.g. `fuv`.)
* Really, there are notes that can be made about almost any code, from the
  well-accepted conventions like `zh` for Mandarin, to many dialectical notes,
  like which variant of Hmong really is the `hmn` data? But the above notes are
  made specifically for codes where the authors are aware of other data sources
  out there that use different conventions.
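Purely as an illustration, the correspondences listed above can be collected into a lookup table from common alternative codes to the codes used here (it covers only the cases noted above and is not part of the dataset itself):

```python
# Alternative code -> code used in MADLAD-400, per the notes above.
ALT_TO_MADLAD = {
    "tl": "fil",                 # Filipino/Tagalog
    "tw": "ak",                  # Twi/Akan (includes Fante)
    "mhr": "chm",                # Meadow Mari (chm macro code used here)
    "nb": "no",                  # Norwegian Bokmål
    "pbt": "ps",                 # Southern Pashto -> Pashto
    "zlm": "ms",                 # Standard Malay
    "aln": "sq", "als": "sq",    # Gheg / Tosk -> Albanian
    "tzm": "ber", "kab": "ber",  # Tamazight dialects -> ber
    "quy": "qu",                 # Ayacucho Chanka -> Quechua macro code
    "fuv": "ff",                 # Fulfulde -> macro code
}

def to_madlad_code(code):
    """Map a common alternative code to the code used here, if listed."""
    return ALT_TO_MADLAD.get(code, code)
```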
195 |
|
|
|
196 |
## Audit
|
197 |
|
198 |
-
Following [Quality at a Glance](https://arxiv.org/abs/2103.12028),
|
199 |
-
an "audit" of every corpus in this dataset. Although
|
200 |
-
languages,
|
201 |
looked at a sample of 20 documents of each language.
|
202 |
|
203 |
-
After an initial round of auditing,
|
204 |
-
them.
|
205 |
|
206 |
### Overall notes from the audit
|
207 |
|
208 |
-
|
209 |
that was clearly majority noise, or only had 20 or fewer docs.** This is a low
|
210 |
-
bar -- twenty documents can be very little indeed, and some of the corpora
|
211 |
-
release are quite noisy, but all of them should have at least the potential to
|
212 |
be used in some useful way. The motivation for not releasing nonsense or tiny
|
213 |
datasets is to not give a false sense of how multilingual this dataset actually
|
214 |
is ("Representation washing"), as recommended by **Quality at a Glance**.
|
@@ -222,6 +214,7 @@ A few overarching points:
|
|
222 |
* Indian languages in the Latin script had a high concentration of
|
223 |
pornographic content.
|
224 |
|
|
|
225 |
### Renames and Merges as a result of the Audit
|
226 |
|
227 |
In several cases, it was clear from the audit that the corpora were not in the
|
* `bjj` merged into the `awa` dataset

## Canaries

Canaries are provided in a separate `canaries` folder. Canaries are organized into three directories: `monolingual` hosts canaries designed for the MADLAD-400 monolingual data, `multiway` for the multiway data, and `generic` for the generic canaries generated only from the model's vocabulary.

* Monolingual: Canaries here are organized by the language the canary was generated from. This corresponds exactly to the `translate_copy` setting in the paper, where the source and target language match.
Within each subdirectory above, canaries are split into separate files named by the canary type. There is always only a single file for each canary type. The `generic` folder contains within it the four canary types.

Canaries can be mixed in with normal training data and then analyzed post hoc after training.

## References
This data is released with the `CC-BY-4.0` license.
## Detailed notes from the audit

Here are the notes on all languages, along with the number of documents
found, and the final decision made with respect to including the language in
this dataset.