---
task_categories:
- question-answering
- text-generation
- conversational
- text-classification
language:
- en
tags:
- Long Context
size_categories:
- 1K<n<10K
---

# 📚 LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks

LongBench v2 is designed to assess the ability of LLMs to handle long-context problems requiring **deep understanding and reasoning** across real-world multitasks. LongBench v2 has the following features: (1) **Length**: context lengths range from 8k to 2M words, with the majority under 128k. (2) **Difficulty**: challenging enough that even human experts, using search tools within the document, cannot answer correctly in a short time. (3) **Coverage**: covers a wide range of realistic scenarios. (4) **Reliability**: all questions are in multiple-choice format for reliable evaluation.

To elaborate, LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty, resulting in human experts achieving only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which incorporates longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of **enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2**.

**🔍 With LongBench v2, we are eager to find out how scaling inference-time compute will affect deep understanding and reasoning in long-context scenarios. View our 🏆 leaderboard [here](https://longbench2.github.io/#leaderboard) (updating).**

🌐 Project Page: https://longbench2.github.io
💻 GitHub Repo: https://github.com/THUDM/LongBench
📚 arXiv Paper: https://arxiv.org/pdf/2308.14508.pdf

# How to use it?

#### Loading Data

You can download and load the **LongBench v2** data through the Hugging Face datasets library ([🤗 HF Repo](https://huggingface.co/datasets/THUDM/LongBench-v2)):
```python
from datasets import load_dataset

# The full benchmark ships as a single 'train' split of 503 examples
dataset = load_dataset('THUDM/LongBench-v2', split='train')
```
Alternatively, you can download the file from [this link](https://huggingface.co/datasets/THUDM/LongBench-v2/resolve/main/data.json) and load the data yourself.
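
For instance, a minimal sketch of reading the downloaded file directly, assuming it is a single JSON array saved as `data.json` in the working directory:

```python
import json

# Hypothetical local path; adjust to wherever you saved the download
with open('data.json', 'r', encoding='utf-8') as f:
    data = json.load(f)

print(len(data))             # number of examples
print(data[0]['question'])   # fields follow the format described below
```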

#### Data Format

All data in **LongBench v2** are standardized to the following format:

```json
{
    "_id": "Unique identifier for each piece of data",
    "domain": "The primary domain category of the data",
    "sub_domain": "The specific sub-domain category within the domain",
    "difficulty": "The difficulty level of the task, either 'easy' or 'hard'",
    "length": "The length category of the task, which can be 'short', 'medium', or 'long'",
    "question": "The input/command for the task, usually short, such as questions in QA, queries in many-shot learning, etc.",
    "choice_A": "Option A", "choice_B": "Option B", "choice_C": "Option C", "choice_D": "Option D",
    "answer": "The ground-truth answer, denoted as A, B, C, or D",
    "context": "The long context required for the task, such as documents, books, code repositories, etc."
}
```
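
For illustration, here is a small sketch of turning one record into a multiple-choice prompt. The template below is a hypothetical example, not the official prompt from our repository:

```python
from datasets import load_dataset

dataset = load_dataset('THUDM/LongBench-v2', split='train')
sample = dataset[0]

# Hypothetical prompt template for illustration; see the GitHub repo
# for the prompts used in the official evaluation.
prompt = (
    f"{sample['context']}\n\n"
    f"Question: {sample['question']}\n"
    f"A. {sample['choice_A']}\n"
    f"B. {sample['choice_B']}\n"
    f"C. {sample['choice_C']}\n"
    f"D. {sample['choice_D']}\n"
    "Answer with a single letter (A, B, C, or D)."
)
print(prompt[-500:])  # contexts can reach 2M words, so print only the tail
```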

#### Evaluation

This repository provides the data for LongBench v2. If you wish to use this dataset for automated evaluation, please refer to our [GitHub repository](https://github.com/THUDM/LongBench).
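
Since every question is multiple-choice with a single gold letter, scoring reduces to exact-match accuracy over the letters A-D. A minimal sketch follows; the answer-extraction heuristic here is a stand-in, not the official parser:

```python
import re

def extract_answer(response):
    # Stand-in heuristic: take the last standalone A-D letter in the response.
    # The official scripts on GitHub implement their own answer parsing.
    matches = re.findall(r'\b([ABCD])\b', response)
    return matches[-1] if matches else None

def accuracy(responses, references):
    # references are the gold letters from the 'answer' field
    correct = sum(extract_answer(r) == ref for r, ref in zip(responses, references))
    return correct / len(references)
```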

# Dataset Statistics

<div style="text-align: left;">
<img src="misc/length.png" width="600" />
</div>

<div style="text-align: left;">
<img src="misc/table.png" width="700" />
</div>

# Citation

```bibtex
@misc{bai2023longbench,
  title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
  author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
  year={2023},
  eprint={2308.14508},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```