
Dataset Card for PolQA Dataset

Dataset Summary

PolQA is the first Polish dataset for open-domain question answering. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7 million candidate passages. The dataset can be used to train both a passage retriever and an abstractive reader.

Supported Tasks and Leaderboards

  • open-domain-qa: The dataset can be used to train a model for open-domain question answering. Success on this task is typically measured using the metric defined for the PolEval 2021 competition.
  • document-retrieval: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by top-k retrieval accuracy or NDCG (see the sketch below).
  • abstractive-qa: The dataset can be used to train a model for abstractive question answering. Success on this task is typically measured using the metric defined for the PolEval 2021 competition.
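
For illustration, top-k retrieval accuracy can be computed as the fraction of questions for which at least one relevant passage appears among the top k retrieved results. A minimal sketch (function and variable names are illustrative, not part of the dataset):

def top_k_accuracy(retrieved, relevant, k=10):
    # retrieved: dict mapping question_id -> ranked list of passage ids
    # relevant:  dict mapping question_id -> set of relevant passage ids
    hits = sum(
        1 for qid, ranking in retrieved.items()
        if set(ranking[:k]) & relevant.get(qid, set())
    )
    return hits / len(retrieved)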

Languages

The text is in Polish, as spoken by the host of the Jeden z Dziesięciu TV show (questions) and Polish Wikipedia editors (passages). The BCP-47 code for Polish is pl-PL.

Dataset Structure

Data Instances

The main part of the dataset consists of manually annotated question-passage pairs. Each instance contains a question, a passage (passage_id, passage_title, passage_text), and a boolean flag indicating whether the passage is relevant to the question (i.e. whether it contains the answer).

For each question, there is a list of possible answers formulated in natural language, the way a Polish speaker would answer the question. This means that the answers may contain prepositions, be inflected, and contain punctuation. In some cases, the answer has multiple correct variants, e.g. numbers written both as words and numerals, synonyms, or abbreviations and their expansions.

Additionally, we provide a classification of each question-answer pair based on the question_formulation, the question_type, and the entity_type/entity_subtype, according to the taxonomy proposed by Maciej Ogrodniczuk and Piotr Przybyła (2021).

{
  'question_id': 6,
  'passage_title': 'Mumbaj',
  'passage_text': 'Mumbaj lub Bombaj (marathi मुंबई, trb.: Mumbaj; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim.',
  'passage_wiki': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.',
  'passage_id': '42609-0',
  'duplicate': False,
  'question': 'W którym państwie leży Bombaj?',
  'relevant': True,
  'annotated_by': 'Igor',
  'answers': "['w Indiach', 'Indie']",
  'question_formulation': 'QUESTION',
  'question_type': 'SINGLE ENTITY',
  'entity_type': 'NAMED',
  'entity_subtype': 'COUNTRY',
  'split': 'train',
  'passage_source': 'human'
}
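
The dataset can be loaded with the datasets library. A minimal sketch, assuming the question-passage pairs and the passage corpus are exposed as separate configurations named pairs and passages (check the repository for the exact configuration names); trust_remote_code=True is needed because the repository relies on a loading script:

from datasets import load_dataset

# Question-passage pairs (configuration name is an assumption).
pairs = load_dataset("ipipan/polqa", "pairs", trust_remote_code=True)
print(pairs["train"][0]["question"])

# Corpus of Wikipedia passages (configuration name is an assumption).
passages = load_dataset("ipipan/polqa", "passages", trust_remote_code=True)
print(passages["train"][0]["title"])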

The second part of the dataset is a corpus of passages from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using WikiExtractor and split into passages at paragraph boundaries, or whenever a passage exceeded 500 characters.

{
  'id': '42609-0',
  'title': 'Mumbaj',
  'text': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.'
}
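
The splitting rule described above can be approximated as follows (a simplified sketch of one plausible reading of the rule; the actual preprocessing relied on WikiExtractor and may differ in its details):

def split_into_passages(article_text, max_chars=500):
    # Accumulate paragraphs into a passage, starting a new passage
    # whenever adding the next paragraph would exceed max_chars.
    passages, current = [], ""
    for paragraph in article_text.split("\n"):
        paragraph = paragraph.strip()
        if not paragraph:
            continue
        if current and len(current) + len(paragraph) > max_chars:
            passages.append(current)
            current = paragraph
        else:
            current = (current + " " + paragraph).strip()
    if current:
        passages.append(current)
    return passages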

Data Fields

Question-passage pairs:

  • question_id: an integer id of the question
  • passage_title: a string containing the title of the Wikipedia article
  • passage_text: a string containing the passage text as extracted by the human annotator
  • passage_wiki: a string containing the passage text as it can be found in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
  • passage_id: a string containing the id of the passage from the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
  • duplicate: a boolean flag representing whether a question-passage pair is duplicated in the dataset. This occurs when the same passage was found in multiple passage sources.
  • question: a string containing the question
  • relevant: a boolean flag representing whether a passage is relevant to the question (i.e. does it contain the answers)
  • annotated_by: a string containing the name of the annotator who verified the relevance of the pair
  • answers: a string containing a list of possible short answers to the question, serialized as a Python-style list (see the parsing sketch after this list)
  • question_formulation: a string describing the kind of expression used to request information. One of the following:
    • QUESTION, e.g. What is the name of the first letter of the Greek alphabet?
    • COMMAND, e.g. Expand the abbreviation 'CIA'.
    • COMPOUND, e.g. This French writer, born in the 19th century, is considered a pioneer of sci-fi literature. What is his name?
  • question_type: a string indicating what type of information is sought by the question. One of the following:
    • SINGLE ENTITY, e.g. Who is the hero in the Tomb Raider video game series?
    • MULTIPLE ENTITIES, e.g. Which two seas are linked by the Corinth Canal?
    • ENTITY CHOICE, e.g. Is "Sombrero" a type of dance, a hat, or a dish?
    • YES/NO, e.g. When the term of office of the Polish Sejm is terminated, does it apply to the Senate as well?
    • OTHER NAME, e.g. What was the nickname of Louis I, the King of the Franks?
    • GAP FILLING, e.g. Finish the proverb: "If you fly with the crows... ".
  • entity_type: a string containing a type of the sought entity. One of the following: NAMED, UNNAMED, or YES/NO.
  • entity_subtype: a string containing a subtype of the sought entity. It takes one of 34 different values.
  • split: a string containing the split of the dataset. One of the following: train, valid, or test.
  • passage_source: a string containing the source of the passage. One of the following:
    • human: the passage was proposed by a human annotator using any internal (i.e. Wikipedia search) or external (e.g. Google) search engines and any keywords or queries they considered useful
    • hard-negatives: the passage was proposed using a neural retriever trained on the passages found by the human annotators
    • zero-shot: the passage was proposed by the BM25 retriever and re-ranked using a multilingual cross-encoder
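
Since the answers field is serialized as a string rather than stored as a native list, it has to be parsed before use; a minimal sketch using ast.literal_eval from the standard library:

import ast

example_answers = "['w Indiach', 'Indie']"
answers = ast.literal_eval(example_answers)
assert answers == ['w Indiach', 'Indie']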

Corpus of passages:

  • id: a string representing the Wikipedia article id and the index of extracted passage. Matches the passage_id from the main part of the dataset.
  • title: a string containing the title of the Wikipedia article. Matches the passage_title from the main part of the dataset.
  • text: a string containing the passage text. Matches the passage_wiki from the main part of the dataset.

Data Splits

The questions are assigned to one of three splits: train, validation, and test. The validation and test questions are randomly sampled from the test-B dataset of the PolEval 2021 competition.

split        # questions   # positive passages   # negative passages
train        5,000         27,131                34,904
validation   1,000         5,839                 6,927
test         1,000         5,938                 6,786
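
The counts above can be recomputed from the pairs data, e.g. as below (a sketch; the configuration name is assumed as earlier, and exact numbers may differ depending on how duplicate pairs are counted):

from collections import Counter
from datasets import load_dataset

pairs = load_dataset("ipipan/polqa", "pairs", trust_remote_code=True)

# Count positive (relevant=True) and negative passages per split.
counts = Counter(
    (row["split"], row["relevant"])
    for subset in pairs.values()
    for row in subset
)
print(counts)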

Dataset Creation

Curation Rationale

The PolQA dataset was created to support and promote research on open-domain question answering for Polish. It also serves as a benchmark for evaluating OpenQA systems.

Source Data

Initial Data Collection and Normalization

The majority of questions come from two existing resources: the 6,000 questions from the PolEval 2021 shared task on QA and an additional 1,000 questions gathered by one of the shared task participants. The questions originally come from collections associated with TV shows, both officially published and gathered online by their fans, as well as questions used in actual quiz competitions, on TV or online.

The evidence passages come from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using WikiExtractor and split into passages at paragraph boundaries, or whenever a passage exceeded 500 characters.

Who are the source language producers?

The questions come from various sources and their authors are unknown, but they are mostly analogous (or even identical) to questions asked during the Jeden z Dziesięciu TV show.

The passages were written by the editors of the Polish Wikipedia.

Annotations

Annotation process

Two approaches were used to annotate the question-passage pairs. Each of them consists of two phases: the retrieval of candidate passages and the manual verification of their relevance.

In the first approach, we asked annotators to use internal (i.e. Wikipedia search) or external (e.g. Google) search engines to find up to five relevant passages using any keywords or queries they considered useful (passage_source="human"). Based on those passages, we trained a neural retriever to extend the number of relevant passages, as well as to retrieve hard negatives (passage_source="hard-negatives").

In the second approach, the passage candidates were proposed by the BM25 retriever and re-ranked using a multilingual cross-encoder (passage_source="zero-shot").

In both cases, all proposed question-passage pairs were manually verified by the annotators.
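
The zero-shot pipeline can be sketched as follows (illustrative only: the rank_bm25 package and a publicly available multilingual cross-encoder stand in for the actual models, which the card does not name):

from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

corpus = [
    "Mumbaj lub Bombaj – stolica indyjskiego stanu Maharasztra.",
    "Warszawa jest stolicą Polski.",
]
question = "W którym państwie leży Bombaj?"

# Stage 1: BM25 proposes candidate passages.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
scores = bm25.get_scores(question.lower().split())
candidates = sorted(range(len(corpus)), key=lambda i: -scores[i])[:10]

# Stage 2: a multilingual cross-encoder re-ranks the candidates.
reranker = CrossEncoder("cross-encoder/mmarco-mMiniLMv2-L12-H384-v1")
pair_scores = reranker.predict([(question, corpus[i]) for i in candidates])
reranked = [c for _, c in sorted(zip(pair_scores, candidates), reverse=True)]
print(corpus[reranked[0]])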

We release the annotation guidelines here.

Who are the annotators?

The annotation team consisted of 16 annotators, all native Polish speakers, most of them with linguistic backgrounds and previous experience as annotators.

Personal and Sensitive Information

The dataset does not contain any personal or sensitive information.

Considerations for Using the Data

Social Impact of Dataset

This dataset was created to promote research on open-domain question answering for Polish and to enable the development of question answering systems.

Discussion of Biases

The passages proposed by the hard-negatives and zero-shot methods are bound to be easier for retrievers to find, since they were proposed by retrievers in the first place. To mitigate this bias, we include the passages found by the human annotators in an unconstrained way (passage_source="human"), which we hypothesize results in more unbiased and diverse examples. Moreover, we asked the annotators to find not one but up to five passages, preferably from different articles, to further increase passage diversity.

Other Known Limitations

The PolQA dataset focuses on trivia questions, which might limit its usefulness in real-world applications, since neural retrievers are known to generalize poorly to other domains.

Additional Information

Dataset Curators

The PolQA dataset was developed by Piotr Rybak, Piotr Przybyła, and Maciej Ogrodniczuk from the Institute of Computer Science, Polish Academy of Sciences.

This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.

Licensing Information

CC BY-SA 4.0

Citation Information

@inproceedings{rybak-etal-2024-polqa-polish,
    title = "{P}ol{QA}: {P}olish Question Answering Dataset",
    author = "Rybak, Piotr  and
      Przyby{\l}a, Piotr  and
      Ogrodniczuk, Maciej",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1125",
    pages = "12846--12855",
    abstract = "Recently proposed systems for open-domain question answering (OpenQA) require large amounts of training data to achieve state-of-the-art performance. However, data annotation is known to be time-consuming and therefore expensive to acquire. As a result, the appropriate datasets are available only for a handful of languages (mainly English and Chinese). In this work, we introduce and publicly release PolQA, the first Polish dataset for OpenQA. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7,097,322 candidate passages. Each question is classified according to its formulation, type, as well as entity type of the answer. This resource allows us to evaluate the impact of different annotation choices on the performance of the QA system and propose an efficient annotation strategy that increases the passage retrieval accuracy@10 by 10.55 p.p. while reducing the annotation cost by 82{\%}.",
}