Datasets: prettifying the citations
README.md CHANGED
@@ -14,7 +14,7 @@ The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/

 
 ### Citation Information
-
+```
 @inproceedings{ranathunga-etal-2024-quality,
     title = "Quality Does Matter: A Detailed Look at the Quality and Utility of Web-Mined Parallel Corpora",
     author = "Ranathunga, Surangika and
@@ -33,7 +33,7 @@ The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/
     pages = "860--880",
     abstract = "We conducted a detailed analysis on the quality of web-mined corpora for two low-resource languages (making three language pairs, English-Sinhala, English-Tamil and Sinhala-Tamil). We ranked each corpus according to a similarity measure and carried out an intrinsic and extrinsic evaluation on different portions of this ranked corpus. We show that there are significant quality differences between different portions of web-mined corpora and that the quality varies across languages and datasets. We also show that, for some web-mined datasets, Neural Machine Translation (NMT) models trained with their highest-ranked 25k portion can be on par with human-curated datasets.",
 }
-
+```
 ### Contributions
 
 We thank the NLLB Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Bapi Akula, Pierre Andrews, Onur Çelebi, Sergey Edunov, Kenneth Heafield, Philipp Koehn, Alex Mourachko, Safiyyah Saleem, Holger Schwenk, and Guillaume Wenzek. We also thank the AllenNLP team at AI2 for hosting and releasing this data, including Akshita Bhagia (for engineering efforts to host the data, and create the huggingface dataset), and Jesse Dodge (for organizing the connection).