jchevallard committed
Commit dd199f3 · 1 Parent(s): 0e25589

Updated README

Files changed (2)
  1. README.md +8 -1
  2. figs/CRAG_table_2.png +3 -0
README.md CHANGED
@@ -26,8 +26,15 @@ pretty_name: CRA
 
 Datasets are taken from Facebook's [CRAG: Comprehensive RAG Benchmark](https://github.com/facebookresearch/CRAG), see their [arXiv paper](https://arxiv.org/abs/2406.04744) for details about the dataset construction.
 
+CRAG (Comprehensive RAG Benchmark) is a rich and comprehensive factual question answering benchmark designed to advance research in RAG. The public version of the dataset includes:
+- 2706 Question-Answer pairs
+- 5 domains: Finance, Sports, Music, Movie, and Open domain
+- 8 types of questions (see image below): simple, simple with condition, set, comparison, aggregation, multi-hop, post-processing heavy, and false premise
+
+![](figs/CRAG_table_2.png)
+
 The datasets `crag_task_1_and_2_dev_v4_subsample_*.json.bz2` have been created from the dataset [crag_task_1_and_2_dev_v4.jsonl.bz2](https://github.com/facebookresearch/CRAG/raw/refs/heads/main/data/crag_task_1_and_2_dev_v4.jsonl.bz2?download=) available on CRAG's GitHub repository.
-For an easier handling and download of the dataset, we have split the 2706 rows of the original file in 5 subsamples, following the procedure below:
+For an easier handling and download of the dataset, we have used the script [crag_to_subsamples.py](https://huggingface.co/datasets/Quivr/crag/blob/main/crag_to_subsamples.py) to split the 2706 rows of the original file in 5 subsamples, following the procedure below:
 1. We have created a new label `answer_type`, classifying the answers in 3 categories:
    - `invalid` for any answer == "invalid question"
    - `no_answer` for any answer == "i don't know"
figs/CRAG_table_2.png ADDED

Git LFS Details

  • SHA256: d214e948d039ba38c4a619907f4218fae5fdc07b8bfd22cdf845f0d81b75bd43
  • Pointer size: 131 Bytes
  • Size of remote file: 102 kB
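
To work with one of the resulting subsamples, it can be downloaded from this repository with `huggingface_hub` and read with pandas. The sketch below is a usage example, not documentation of the repository layout: the exact filename (`..._subsample_1.json.bz2`), the JSON orientation, and the presence of the `answer_type` column in the stored files are assumptions inferred from the README pattern above.

```python
# Hypothetical usage sketch: download and read one subsample from the Quivr/crag repo.
# Filename and JSON orientation are assumptions inferred from the README pattern.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Quivr/crag",
    filename="crag_task_1_and_2_dev_v4_subsample_1.json.bz2",
    repo_type="dataset",
)

# Add lines=True if the subsamples turn out to be JSON Lines rather than a JSON array.
df = pd.read_json(path, compression="bz2")

print(df.shape)
print(df["answer_type"].value_counts())  # label added by the split procedure (assumed to be stored)
```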