---
language:
  - en
  - fr
  - de
  - es
  - tr
configs:
  - config_name: en
    data_files:
      - split: train
        path: RELX_en.json
  - config_name: fr
    data_files:
      - split: train
        path: RELX_fr.json
  - config_name: de
    data_files:
      - split: train
        path: RELX_de.json
  - config_name: es
    data_files:
      - split: train
        path: RELX_es.json
  - config_name: tr
    data_files:
      - split: train
        path: RELX_tr.json
---

Dataset origin: https://github.com/boun-tabi/RELX

# RELX-Distant

This dataset is gathered from Wikipedia and Wikidata. The process is as follows:

  1. The Wikipedia dumps for the corresponding languages are downloaded and converted into raw documents, preserving the Wikipedia hyperlinks on entity mentions.
  2. The raw documents are split into sentences with spaCy (Honnibal and Montani, 2017), and all hyperlinks are converted to their corresponding Wikidata IDs.
  3. Sentences that include entity pairs with Wikidata relations (Vrandečić and Krötzsch, 2014) are collected. We filter and combine some of the relations and propose RELX-Distant, whose statistics are shown in the table below.
| Language | Number of Sentences |
| -------- | ------------------: |
| English  | 815,689             |
| French   | 652,842             |
| German   | 652,062             |
| Spanish  | 397,875             |
| Turkish  | 57,114              |
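The distant-supervision step (step 3 above) can be sketched as follows. This is a minimal illustration, not the authors' actual code: the relation table, sentence format, and `collect_examples` helper are hypothetical, and the Wikidata IDs are real but chosen only as examples.

```python
# Known Wikidata relations: (subject_id, object_id) -> property_id.
# P17 is Wikidata's "country" property; the pairs below are toy examples.
relations = {
    ("Q90", "Q142"): "P17",   # Paris -> France : country
    ("Q64", "Q183"): "P17",   # Berlin -> Germany : country
}

# Sentences paired with the Wikidata IDs of the entities they mention
# (after sentence splitting and hyperlink-to-ID resolution, steps 1-2).
sentences = [
    ("Paris is the capital of France.", ["Q90", "Q142"]),
    ("Berlin lies on the river Spree.", ["Q64", "Q1713"]),
]

def collect_examples(sentences, relations):
    """Keep sentences whose entity pairs appear in the relation table."""
    examples = []
    for text, entity_ids in sentences:
        for subj in entity_ids:
            for obj in entity_ids:
                if subj != obj and (subj, obj) in relations:
                    examples.append((text, subj, obj, relations[(subj, obj)]))
    return examples

print(collect_examples(sentences, relations))
# -> [('Paris is the capital of France.', 'Q90', 'Q142', 'P17')]
```

The second sentence is dropped because its entity pair has no entry in the relation table, which is exactly the filtering behaviour distant supervision relies on.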

## Citation

```bibtex
@inproceedings{koksal-ozgur-2020-relx,
    title = "The {RELX} Dataset and Matching the Multilingual Blanks for Cross-Lingual Relation Classification",
    author = {K{\"o}ksal, Abdullatif  and
      {\"O}zg{\"u}r, Arzucan},
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.32",
    doi = "10.18653/v1/2020.findings-emnlp.32",
    pages = "340--350",
}
```