---
license: other
license_name: apple
license_link: LICENSE
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: original_context
      dtype: string
    - name: original_answers
      sequence: string
    - name: substituted_context
      dtype: string
    - name: substituted_answers
      sequence: string
    - name: substitution_type
      dtype: string
  splits:
    - name: dev
      num_bytes: 8106521
      num_examples: 5510
  download_size: 3718124
  dataset_size: 16213042
configs:
  - config_name: default
    data_files:
      - split: dev
        path: data/dev-*
---

## Reference

This dataset is a reproduction of the dataset introduced in "Entity-Based Knowledge Conflicts in Question Answering" (Longpre et al., 2021).

```bibtex
@inproceedings{longpre-etal-2021-entity,
    title = "Entity-Based Knowledge Conflicts in Question Answering",
    author = "Longpre, Shayne  and
      Perisetla, Kartik  and
      Chen, Anthony  and
      Ramesh, Nikhil  and
      DuBois, Chris  and
      Singh, Sameer",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.565",
    pages = "7052--7063",
}
```

## Source Dataset

Among the datasets provided by the MRQA Shared Task 2019, we use the dev split of Natural Questions.

```python
import wget

# Download the MRQA-formatted Natural Questions dev split
wget.download(
    "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/NaturalQuestionsShort.jsonl.gz",
    "destination_dir/NaturalQuestionsShort.jsonl.gz",
)
```

For the convenience of our analysis, we filtered out the following (a sketch of this filtering is shown after the list):

- duplicate QA examples that have identical (question, context) pairs
- QA examples whose context exceeds 400 words
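
A minimal sketch of this filtering, assuming the MRQA JSONL layout in which the first line is a header and each subsequent line holds a `context` with its `qas` list (these field names and the word-count rule below are assumptions for illustration, not part of this card):

```python
import gzip
import json

def load_filtered_examples(path, max_context_words=400):
    """Read an MRQA-style .jsonl.gz file and apply the two filters above."""
    seen = set()   # (question, context) pairs already kept
    kept = []
    with gzip.open(path, "rt", encoding="utf-8") as f:
        next(f)    # skip the MRQA header line
        for line in f:
            record = json.loads(line)
            context = record["context"]
            if len(context.split()) > max_context_words:
                continue   # drop QA examples with overly long contexts
            for qa in record["qas"]:
                key = (qa["question"], context)
                if key in seen:
                    continue   # drop duplicate (question, context) pairs
                seen.add(key)
                kept.append({"question": qa["question"],
                             "context": context,
                             "answers": qa["answers"]})
    return kept

examples = load_filtered_examples("destination_dir/NaturalQuestionsShort.jsonl.gz")
```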

## Downloading our Dataset

```python
# Load the dataset from the Hugging Face Hub
from datasets import load_dataset

dataset = load_dataset("younanna/NQ-Swap")
```
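
The card defines a single `dev` split; a quick way to inspect it (the field names follow the schema listed under Data Fields below):

```python
dev = dataset["dev"]
print(dev)                          # number of rows and column names
print(dev[0]["question"])           # first question
print(dev[0]["original_answers"])   # gold answers for the original context
print(dev[0]["substituted_answers"])
```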

## Data Fields

- "id": The identifier (string) of each QA example
- "question": The question in natural language
- "original_context": The context from the original "Natural Questions" dataset
- "original_answers": The gold answers based on the information in "original_context"
- "substituted_context": The context obtained by replacing every occurrence of the original answer in "original_context" with the substituted answer
- "substituted_answers": The result of the substitution applied to "original_answers". The types of substitutions are explained in Section 2.2 of the paper.
- "substitution_type": The type of substitution that was applied (see the sketch after this list for how these fields relate)
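
As an illustration of how these fields relate, the snippet below (a hedged sketch, not part of the original card) checks that the substituted answer appears in the substituted context for a few examples:

```python
from datasets import load_dataset

dev = load_dataset("younanna/NQ-Swap", split="dev")

for row in dev.select(range(3)):   # inspect a few examples
    found = any(ans in row["substituted_context"]
                for ans in row["substituted_answers"])
    print(row["id"], row["substitution_type"],
          "substituted answer in context:", found)
```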