---
license: odc-by
task_categories:
- translation
language:
- en
- si
size_categories:
- 10K<n<100K
---
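### How to Use

A minimal loading sketch using the Hugging Face `datasets` library. The repository ID below is a placeholder, since the actual Hub name is not stated in this card; substitute it with the dataset's real identifier.

```python
from datasets import load_dataset

# Placeholder repository ID; replace with the actual Hub dataset name.
ds = load_dataset("your-org/your-dataset-name", split="train")

# Each example is expected to hold an English-Sinhala sentence pair.
print(ds[0])
```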
### Licensing Information

The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by the respective Terms of Use and License of the original source.


### Citation Information
```
@inproceedings{ranathunga-etal-2024-quality,
    title = "Quality Does Matter: A Detailed Look at the Quality and Utility of Web-Mined Parallel Corpora",
    author = "Ranathunga, Surangika  and
      De Silva, Nisansa  and
      Menan, Velayuthan  and
      Fernando, Aloka  and
      Rathnayake, Charitha",
    editor = "Graham, Yvette  and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.52",
    pages = "860--880",
    abstract = "We conducted a detailed analysis on the quality of web-mined corpora for two low-resource languages (making three language pairs, English-Sinhala, English-Tamil and Sinhala-Tamil). We ranked each corpus according to a similarity measure and carried out an intrinsic and extrinsic evaluation on different portions of this ranked corpus. We show that there are significant quality differences between different portions of web-mined corpora and that the quality varies across languages and datasets. We also show that, for some web-mined datasets, Neural Machine Translation (NMT) models trained with their highest-ranked 25k portion can be on par with human-curated datasets.",
}
```
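The abstract above describes ranking sentence pairs by a similarity measure and keeping only the highest-ranked 25k portion for NMT training. A hedged sketch of that selection is below; the repository ID and the `score` column name are assumptions made purely for illustration, not the paper's actual schema.

```python
from datasets import load_dataset

# Placeholder repository ID and similarity column name; adapt both to the real dataset.
ds = load_dataset("your-org/your-dataset-name", split="train")

# Sort by the assumed similarity column (highest first) and keep the top 25k pairs,
# mirroring the "highest-ranked 25k portion" evaluated in the paper.
top_25k = ds.sort("score", reverse=True).select(range(25_000))
print(len(top_25k))
```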
### Contributions

We thank the NLLB Meta AI team for open-sourcing the metadata and instructions on how to use it, with special thanks to Bapi Akula, Pierre Andrews, Onur Çelebi, Sergey Edunov, Kenneth Heafield, Philipp Koehn, Alex Mourachko, Safiyyah Saleem, Holger Schwenk, and Guillaume Wenzek. We also thank the AllenNLP team at AI2 for hosting and releasing this data, including Akshita Bhagia (for the engineering effort to host the data and create the Hugging Face dataset) and Jesse Dodge (for organizing the connection).