---
license: other
license_name: apple
license_link: LICENSE
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: original_context
    dtype: string
  - name: original_answers
    sequence: string
  - name: substituted_context
    dtype: string
  - name: substituted_answers
    sequence: string
  - name: substitution_type
    dtype: string
  splits:
  - name: dev
    num_bytes: 8106521
    num_examples: 5510
  download_size: 3718124
  dataset_size: 16213042
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev-*
---
# Reference
This dataset is a reproduced version of the dataset from ["Entity-Based Knowledge Conflicts in Question Answering"](https://arxiv.org/abs/2109.05052).
```bib
@inproceedings{longpre-etal-2021-entity,
    title = "Entity-Based Knowledge Conflicts in Question Answering",
    author = "Longpre, Shayne and
      Perisetla, Kartik and
      Chen, Anthony and
      Ramesh, Nikhil and
      DuBois, Chris and
      Singh, Sameer",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.565",
    pages = "7052--7063",
}
```
### Source Dataset
Among the datasets provided by [MRQA Shared Task 2019](https://github.com/mrqa/MRQA-Shared-Task-2019), we use the `dev` split of [Natural Questions](https://research.google/pubs/natural-questions-a-benchmark-for-question-answering-research/):
```python
import wget

wget.download("https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/NaturalQuestionsShort.jsonl.gz",
              "destination_dir/NaturalQuestionsShort.jsonl.gz")
```
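The file is gzipped JSONL; a minimal reading sketch, assuming the standard MRQA layout in which the first line is a metadata header and each subsequent line is one example:
```python
import gzip
import json

# Read the MRQA-formatted file: skip the header line, parse one example per line
with gzip.open("destination_dir/NaturalQuestionsShort.jsonl.gz", "rt", encoding="utf-8") as f:
    header = json.loads(next(f))  # first line holds dataset metadata
    examples = [json.loads(line) for line in f]
```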
For the convenience of our analysis, we have filtered out (as sketched after this list):
- duplicate QA examples that share an identical (question, context) pair
- QA examples whose context exceeds 400 words
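A minimal sketch of this filtering, assuming MRQA-style examples with `context` and `qas` fields as read above (`examples` and `filter_examples` are illustrative names, not part of the released code):
```python
def filter_examples(examples):
    # Drop duplicate (question, context) pairs and overly long contexts
    seen, kept = set(), []
    for ex in examples:
        context = ex["context"]
        if len(context.split()) > 400:  # contexts over 400 words are dropped
            continue
        for qa in ex["qas"]:
            key = (qa["question"], context)
            if key in seen:  # skip duplicate (question, context) pairs
                continue
            seen.add(key)
            kept.append({"id": qa["qid"], "question": qa["question"],
                         "context": context, "answers": qa["answers"]})
    return kept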
# Downloading our Dataset
```python
# loading dataset
from datasets import load_dataset
dataset = load_dataset("younanna/NQ-Swap")
```
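A single example can then be accessed by split and index, e.g.:
```python
example = dataset["dev"][0]
print(example["question"])
print(example["original_answers"], "->", example["substituted_answers"])
print(example["substitution_type"])
```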
# Data Fields
- "id": The identifier (string) of each QA example
- "question": The question in natural language
- "original_context": The context from the original "Natural Questions" dataset
- "original_answers": The gold answers based on the information in "original_context"
- "substituted_context": The context obtained by replacing all occurrences of "original_answer" in "original_context", to "substituted_answer"
- "substituted_answers": The result of substitution performed on "original_answers". The types of substitutions are explained in [Section 2.2 of the paper](https://arxiv.org/pdf/2109.05052#page=2.45).
- "substitution_type": The type of substitution that has been applied