---
language:
- en
- multilingual
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: WikiTitles
tags:
- sentence-transformers
dataset_info:
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 755332378
num_examples: 14700458
download_size: 685053033
dataset_size: 755332378
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Parallel Sentences - WikiTitles
This dataset contains parallel sentences (i.e., an English sentence paired with the same sentence in another language) for numerous languages. Most of the sentences originate from the [OPUS website](https://opus.nlpl.eu/).
In particular, this dataset contains the WikiTitles subset of the [parallel-sentences](https://huggingface.co./datasets/sentence-transformers/parallel-sentences) collection.
## Related Datasets
The following datasets are also a part of the Parallel Sentences collection:
* [parallel-sentences-europarl](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-europarl)
* [parallel-sentences-global-voices](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-global-voices)
* [parallel-sentences-muse](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-muse)
* [parallel-sentences-jw300](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-jw300)
* [parallel-sentences-news-commentary](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-news-commentary)
* [parallel-sentences-opensubtitles](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-opensubtitles)
* [parallel-sentences-talks](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-talks)
* [parallel-sentences-tatoeba](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-tatoeba)
* [parallel-sentences-wikimatrix](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-wikimatrix)
* [parallel-sentences-wikititles](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-wikititles)
* [parallel-sentences-ccmatrix](https://huggingface.co./datasets/sentence-transformers/parallel-sentences-ccmatrix)
These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).
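As a rough illustration, the sketch below shows one way to use these pairs for multilingual knowledge distillation. It assumes the current `sentence-transformers` Trainer API; the teacher/student model names and the 10k-row subset are illustrative choices, not something prescribed by this dataset card.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Illustrative model choices: teacher (English) and student (multilingual) must
# produce embeddings of the same dimensionality for MSELoss (both 768-dim here).
teacher = SentenceTransformer("paraphrase-distilroberta-base-v2")
student = SentenceTransformer("xlm-roberta-base")

# Small subset only to keep the sketch light.
train_dataset = load_dataset(
    "sentence-transformers/parallel-sentences-wikititles", split="train"
).select(range(10_000))

# The student learns to map both the English and the non-English title onto the
# teacher's English embedding, supplied here as the "label" column.
def add_teacher_embedding(batch):
    return {"label": teacher.encode(batch["english"])}

train_dataset = train_dataset.map(add_teacher_embedding, batched=True, batch_size=256)

trainer = SentenceTransformerTrainer(
    model=student,
    train_dataset=train_dataset,
    loss=losses.MSELoss(student),
)
trainer.train()
```

See the linked sbert.net page for the full multilingual training recipes; this sketch only covers the distillation objective on a single parallel dataset.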
## Dataset Stats
* Columns: "english", "non_english"
* Column types: `str`, `str`
* Examples:
```python
{
"english": "Hossain Toufique Imam",
"non_english": "হোসেন তৌফিক ইমাম"
}
```
* Collection strategy: Processing the raw data from [parallel-sentences](https://huggingface.co./datasets/sentence-transformers/parallel-sentences) and formatting it in Parquet.
* Deduplicated: No
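
The pairs above can be loaded directly with the `datasets` library; a minimal sketch:

```python
from datasets import load_dataset

# Load the WikiTitles parallel pairs (default config, train split).
dataset = load_dataset("sentence-transformers/parallel-sentences-wikititles", split="train")

# Each row is a dict with "english" and "non_english" strings, e.g.
# {"english": "Hossain Toufique Imam", "non_english": "হোসেন তৌফিক ইমাম"}.
print(dataset[0])
```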