Modalities: Text
Formats: csv
Libraries: Datasets, pandas
License: ODC-BY
velmen committed
Commit 3ed7d8a
1 Parent(s): c2a59bb

Edited the acknowledgement

Files changed (1)
1. README.md +3 -2
README.md CHANGED
@@ -34,6 +34,7 @@ The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/
   abstract = "We conducted a detailed analysis on the quality of web-mined corpora for two low-resource languages (making three language pairs, English-Sinhala, English-Tamil and Sinhala-Tamil). We ranked each corpus according to a similarity measure and carried out an intrinsic and extrinsic evaluation on different portions of this ranked corpus. We show that there are significant quality differences between different portions of web-mined corpora and that the quality varies across languages and datasets. We also show that, for some web-mined datasets, Neural Machine Translation (NMT) models trained with their highest-ranked 25k portion can be on par with human-curated datasets.",
   }
   ```
- ### Contributions
+ ### Acknowledgement
+ This work was funded by the Google Award for Inclusion Research (AIR) 2022.
 
- We thank the NLLB Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Bapi Akula, Pierre Andrews, Onur Çelebi, Sergey Edunov, Kenneth Heafield, Philipp Koehn, Alex Mourachko, Safiyyah Saleem, Holger Schwenk, and Guillaume Wenzek. We also thank the AllenNLP team at AI2 for hosting and releasing this data, including Akshita Bhagia (for engineering efforts to host the data, and create the huggingface dataset), and Jesse Dodge (for organizing the connection).
+ We thank the NLLB Meta AI team for open sourcing the dataset. We also thank the AllenNLP team at AI2 for hosting and releasing this data.
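
The card tags the dataset as csv, usable with the Datasets and pandas libraries. A minimal loading sketch under those assumptions follows; the repository id below is a placeholder, since the commit page does not show the full dataset id.

```python
# Minimal sketch: load this csv-backed dataset with the Hugging Face Datasets library.
from datasets import load_dataset

repo_id = "velmen/dataset-name"  # hypothetical placeholder; replace with the actual dataset id
ds = load_dataset(repo_id)

# The pandas tag suggests tabular access; convert a split for quick inspection.
df = ds["train"].to_pandas()
print(df.head())
```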