---
language:
- en
- fr
- de
- es
- tr
multilinguality:
- multilingual
configs:
- config_name: en
  data_files:
  - split: train
    path: RELX_distant_en.json
- config_name: fr
  data_files:
  - split: train
    path: RELX_distant_fr.json
- config_name: de
  data_files:
  - split: train
    path: RELX_distant_de.json
- config_name: es
  data_files:
  - split: train
    path: RELX_distant_es.json
- config_name: tr
  data_files:
  - split: train
    path: RELX_distant_tr.json
---
Dataset origin: https://github.com/boun-tabi/RELX
# RELX-Distant
This dataset is gathered from Wikipedia and Wikidata. The process is as follows:
- The Wikipedia dumps for the corresponding languages are downloaded and converted into raw documents, with the Wikipedia hyperlinks of entities preserved.
- The raw documents are split into sentences with spaCy (Honnibal and Montani, 2017), and all hyperlinks are converted to their corresponding Wikidata IDs.
- Sentences that include entity pairs with Wikidata relations (Vrandečić and Krötzsch, 2014) are collected (see the sketch after this list). We filter and combine some of the relations and propose RELX-Distant, whose statistics are shown in the table below.
| Language | Number of Sentences |
|---|---|
| English | 815,689 |
| French | 652,842 |
| German | 652,062 |
| Spanish | 397,875 |
| Turkish | 57,114 |
## Citation
```bibtex
@inproceedings{koksal-ozgur-2020-relx,
    title = "The {RELX} Dataset and Matching the Multilingual Blanks for Cross-Lingual Relation Classification",
    author = {K{\"o}ksal, Abdullatif and
      {\"O}zg{\"u}r, Arzucan},
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.32",
    doi = "10.18653/v1/2020.findings-emnlp.32",
    pages = "340--350",
}
```