Commit: Edited the Acknowledgement

README.md (CHANGED)
````diff
@@ -34,6 +34,7 @@ The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/
 abstract = "We conducted a detailed analysis on the quality of web-mined corpora for two low-resource languages (making three language pairs, English-Sinhala, English-Tamil and Sinhala-Tamil). We ranked each corpus according to a similarity measure and carried out an intrinsic and extrinsic evaluation on different portions of this ranked corpus. We show that there are significant quality differences between different portions of web-mined corpora and that the quality varies across languages and datasets. We also show that, for some web-mined datasets, Neural Machine Translation (NMT) models trained with their highest-ranked 25k portion can be on par with human-curated datasets.",
 }
 ```
-###
+### Acknowledgement
+This work was funded by the Google Award for Inclusion Research (AIR) 2022.
 
-We thank the NLLB Meta AI team for open sourcing the
+We thank the NLLB Meta AI team for open sourcing the dataset. We also thank the AllenNLP team at AI2 for hosting and releasing this data.
````