---
language:
- ca
- de
multilinguality:
- multilingual
pretty_name: CA-DE Parallel Corpus
size_categories:
- 1M<n<10M
task_categories:
- translation
task_ids: []
license: cc-by-nc-sa-4.0
---

# Dataset Card for CA-DE Parallel Corpus

## Dataset Description

- **Point of Contact:** [email protected]

### Dataset Summary

The CA-DE Parallel Corpus is a Catalan-German dataset of parallel sentences created to support Catalan in NLP tasks, specifically 
Machine Translation.

### Supported Tasks and Leaderboards

The dataset can be used to train Bilingual Machine Translation models between German and Catalan in either direction, 
as well as Multilingual Machine Translation models.

### Languages

The sentences included in the dataset are in Catalan (CA) and German (DE).

## Dataset Structure

### Data Instances

Two separate txt files are provided, with the sentences aligned in the same order:

- ca-de_6M.ca
- ca-de_6M.de
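
Because the two files are aligned line by line, the sentence pairs can be recovered by reading them in parallel. A minimal sketch in Python, assuming both files are in the working directory:

```python
# Minimal sketch: read the two line-aligned text files into (Catalan, German) pairs.
with open("ca-de_6M.ca", encoding="utf-8") as f_ca, \
     open("ca-de_6M.de", encoding="utf-8") as f_de:
    pairs = [(ca.rstrip("\n"), de.rstrip("\n")) for ca, de in zip(f_ca, f_de)]

print(pairs[0])  # first (Catalan, German) sentence pair
```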

The dataset is additionally provided in parquet format: ca-de_6M.parquet.

The parquet file contains two columns of parallel text obtained from the two original text files. 
Each row in the file represents a pair of parallel sentences in the two languages of the dataset.
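
A minimal sketch of loading the parquet file, assuming pandas and the `datasets` library are installed; since the column names are not specified on this card, the sketch simply inspects them:

```python
import pandas as pd
from datasets import load_dataset

# Minimal sketch: two equivalent ways to load the parquet version locally.
df = pd.read_parquet("ca-de_6M.parquet")
print(df.columns.tolist(), df.shape)   # the two parallel-text columns and row count

ds = load_dataset("parquet", data_files="ca-de_6M.parquet", split="train")
print(ds[0])                           # first Catalan-German sentence pair
```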


### Data Fields

[N/A]
    
### Data Splits

The dataset contains a single split: `train`.

## Dataset Creation

### Curation Rationale

This dataset is aimed at promoting the development of Machine Translation between Catalan and other languages, specifically German.

### Source Data

#### Initial Data Collection and Normalization

<!-- The first portion of the corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/): 
MultiCCAligned, WikiMatrix, GNOME, KDE4, OpenSubtitles, GlobalVoices, Tatoeba.

Additionally, the corpus contains synthetic parallel data generated from the original Spanish-Catalan Europarl and Tilde corpora 
made public by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).

A last portion of the dataset is composed by synthetic parallel data generated from a random sampling of the Spanish-German corpora 
available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co./PlanTL-GOB-ES/mt-plantl-es-ca) model. -->

The corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/): 
MultiCCAligned, WikiMatrix, GNOME, KDE4, OpenSubtitles, GlobalVoices, Tatoeba.

All data was filtered according to two specific criteria:
- Alignment: sentence-level alignment scores were calculated using [LaBSE](https://huggingface.co./sentence-transformers/LaBSE), and sentence pairs with a score below 0.75 were discarded.

- Language identification: the probability of each sentence being in the expected language was calculated using [Lingua.py](https://github.com/pemistahl/lingua-py), and sentences with a language probability score below 0.5 were discarded.

The filtered datasets were then concatenated and deduplicated to form the final corpus.
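
As a rough illustration only, the sketch below shows how such a filter could be applied to a toy list of candidate pairs, assuming the `sentence-transformers` and `lingua-language-detector` packages; the actual pipeline used to build this corpus is not published here.

```python
from sentence_transformers import SentenceTransformer
from lingua import Language, LanguageDetectorBuilder

# Hypothetical candidate pairs; in practice these come from the Opus datasets.
pairs = [("Bon dia a tothom.", "Guten Morgen allerseits.")]

# Alignment filter: LaBSE embeddings, keep pairs with cosine similarity >= 0.75.
labse = SentenceTransformer("sentence-transformers/LaBSE")
ca_emb = labse.encode([ca for ca, _ in pairs], normalize_embeddings=True)
de_emb = labse.encode([de for _, de in pairs], normalize_embeddings=True)
scores = (ca_emb * de_emb).sum(axis=1)  # cosine similarity of normalized vectors

# Language-identification filter: keep sentences whose Lingua confidence
# for the expected language is at least 0.5.
detector = LanguageDetectorBuilder.from_languages(Language.CATALAN, Language.GERMAN).build()

kept = set()  # deduplicate while filtering
for (ca, de), score in zip(pairs, scores):
    if score < 0.75:
        continue
    if detector.compute_language_confidence(ca, Language.CATALAN) < 0.5:
        continue
    if detector.compute_language_confidence(de, Language.GERMAN) < 0.5:
        continue
    kept.add((ca, de))

print(len(kept), "pairs kept")
```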

#### Who are the source language producers?

[Opus](https://opus.nlpl.eu/) 

<!-- [SoftCatalà](https://github.com/Softcatala/Europarl-catalan) -->

### Annotations

#### Annotation process

The dataset does not contain any annotations.

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied, 
personal and sensitive information may be present in the data. This needs to be considered when using the data for training models.

## Considerations for Using the Data

### Social Impact of Dataset

By providing this resource, we intend to promote the use of Catalan across NLP tasks, thereby improving the accessibility and visibility of the Catalan language.

### Discussion of Biases

No specific bias mitigation strategies were applied to this dataset. 
Inherent biases may exist within the data.

### Other Known Limitations

The dataset contains general-domain data, so it would be of limited use when applied to more specific domains such as the biomedical or legal domains.

## Additional Information

### Dataset Curators

Language Technologies Unit at the Barcelona Supercomputing Center ([email protected]).

This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).


### Licensing Information

This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).

### Citation Information

[N/A]

### Contributions

[N/A]