Update README.md

README.md (CHANGED)

@@ -28,19 +28,19 @@ dataset is released in a document-level form that has been deduplicated.

You can load both the clean and noisy versions of any language by specifying its LangID:

~~~
from datasets import load_dataset

madlad_abt = load_dataset("allenai/madlad-400", "abt")
~~~
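
Loading a config this way should return both subsets in one `DatasetDict`; a minimal sketch of inspecting it (the "clean"/"noisy" split names and the `text` field are assumed from the descriptions elsewhere on this card, not guaranteed here):

~~~
# Show the available splits and peek at one record
# ("clean" split name and "text" field assumed).
print(madlad_abt)
print(madlad_abt["clean"][0]["text"][:100])
~~~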

A list of languages can also be supplied with a keyword argument:

~~~
madlad_multilang = load_dataset("allenai/madlad-400", languages=["abt", "ace"])
~~~

Additionally, you can load the noisy and clean subsets separately with the split keyword argument:

~~~
madlad_multilang_clean = load_dataset("allenai/madlad-400", languages=["abt", "ace"], split="clean")
~~~
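
Because many language subsets are large, it may help to stream records rather than download everything up front. This is a minimal sketch using the generic `datasets` streaming API; that this dataset's loading script supports `streaming=True`, and that examples expose a `text` field, are assumptions rather than something stated on this card:

~~~
from datasets import load_dataset

# Fetch records lazily instead of downloading the full subset first
# (streaming support for this loading script is assumed).
madlad_stream = load_dataset(
    "allenai/madlad-400", "abt", split="clean", streaming=True
)

# Peek at the first three documents ("text" field assumed).
for i, example in enumerate(madlad_stream):
    print(example["text"][:80])
    if i == 2:
        break
~~~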

@@ -951,9 +951,9 @@ A few comments too long to fit in the table above:

The number of documents, sentences, tokens, characters, and bytes for the noisy
and clean splits of the data. Note that the "toks" field below uses whitespace
tokenization, so it is not appropriate for languages that do not separate words
with whitespace, such as Chinese (see the section above). Note also that the
English subset in this version is missing 18% of the documents that were
included in the published analysis of the dataset; these documents will be
incorporated in an update coming soon.
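
As a quick illustration of why whitespace token counts break down for such languages:

~~~
# Whitespace splitting counts space-separated chunks, so a multi-word
# Chinese sentence still registers as a single "token".
print(len("the cat sat on the mat".split()))  # 6
print(len("猫坐在垫子上".split()))             # 1
~~~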

BCP-47 | docs (noisy) | docs (clean) | sents (noisy) | sents (clean) | toks (noisy) | toks (clean) | chars (noisy) | chars (clean) | bytes (clean) | bytes (noisy)
----------------|:---------------|:---------------|:----------------|:----------------|:---------------|:---------------|:----------------|:----------------|:---------|:---------