Tasks: Question Answering
Modalities: Text
Formats: parquet
Languages: English
Size: 10M - 100M
ArXiv: 2406.14972
Update README.md
README.md CHANGED
@@ -18,4 +18,17 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+task_categories:
+- question-answering
+language:
+- en
+size_categories:
+- 10M<n<100M
 ---
+# Wikipedia Dump without Duplicates
+
+## Dataset Summary
+This is a cleaned and de-duplicated version of the English Wikipedia dump dated December 20, 2018. Originally sourced from the [DPR repository](https://github.com/facebookresearch/DPR), it has been processed to remove duplicates, resulting in a final count of **20,970,784** passages, each consisting of 100 words.
+The original corpus is available for download via this [link](https://dl.fbaipublicfiles.com/dpr/wikipedia_split/psgs_w100.tsv.gz).
+
+The corpus is used in the research paper [A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems](https://arxiv.org/abs/2406.14972), supporting experiments comparing base and instruct Large Language Models within Retrieval-Augmented Generation systems.