Datasets:
hezheqi committed
Commit • 88e9191
Parent(s): a6ed8f3

Add validation dataset
Files changed:
- .gitattributes +2 -0
- .gitignore +1 -0
- README.md +99 -3
- assets/example.png +3 -0
- val/biology.jsonl +0 -0
- val/chemistry.jsonl +0 -0
- val/geography.jsonl +0 -0
- val/history.jsonl +0 -0
- val/images.tar +3 -0
- val/math.jsonl +0 -0
- val/physics.jsonl +0 -0
- val/politics.jsonl +0 -0
.gitattributes CHANGED
@@ -53,3 +53,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+val/images.tar filter=lfs diff=lfs merge=lfs -text
+assets/example.png filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
+images
README.md CHANGED
@@ -1,3 +1,99 @@
# CMMU

[**📖 Paper**](https://arxiv.org/) | [**🤗 Dataset**](https://huggingface.co/datasets) | [**GitHub**](https://github.com/FlagOpen/CMMU)

This repo contains the evaluation code for the paper [**CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning**](https://arxiv.org/).

## Introduction

CMMU is a novel multi-modal benchmark designed to evaluate domain-specific knowledge across seven foundational subjects: math, biology, physics, chemistry, geography, politics, and history. It comprises 3603 questions, incorporating text and images, drawn from a range of Chinese exams. Spanning primary to high school levels, CMMU offers a thorough evaluation of model capabilities across different educational stages.

![](assets/example.png)
## Evaluation Results

We have evaluated 10 models on CMMU so far. The results are shown in the following table.

| Model               | Val Avg. | Test Avg. |
|---------------------|----------|-----------|
| InstructBLIP-13b    | 0.39     | 0.48      |
| CogVLM-7b           | 5.55     | 4.9       |
| ShareGPT4V-7b       | 7.95     | 7.63      |
| mPLUG-Owl2-7b       | 8.69     | 8.58      |
| LLaVA-1.5-13b       | 11.36    | 11.96     |
| Qwen-VL-Chat-7b     | 11.71    | 12.14     |
| Intern-XComposer-7b | 18.65    | 19.07     |
| Gemini-Pro          | 21.58    | 22.5      |
| Qwen-VL-Plus        | 26.77    | 26.9      |
| GPT-4V              | 30.19    | 30.91     |
## How to use

### Load dataset

```python
from eval.cmmu_dataset import CmmuDataset

# CmmuDataset will load *.jsonl files in data_root
dataset = CmmuDataset(data_root=your_path_to_cmmu_dataset)
```
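As a quick sanity check after loading, you can iterate over the dataset; a minimal sketch, assuming each item is a dict with the fields shown in the question examples below (`id`, `type`, `question_info`):

```python
# Illustrative only: field names are taken from the question examples
# below and may not match the actual CmmuDataset item structure.
for item in dataset:
    print(item["id"], item["type"])
    print(item["question_info"])
    break  # inspect just the first question
```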
**About fill-in-the-blank questions**

For fill-in-the-blank questions, `CmmuDataset` generates a new question from each entry in `sub_questions`. For example, the original question is:

```python
{
    "type": "fill-in-the-blank",
    "question_info": "question",
    "id": "subject_1234",
    "sub_questions": ["sub_question_0", "sub_question_1"],
    "answer": ["answer_0", "answer_1"]
}
```
|
49 |
+
Converted questions are:
|
50 |
+
```python
|
51 |
+
[
|
52 |
+
{
|
53 |
+
"type": "fill-in-the-blank",
|
54 |
+
"question_info": "question" + "sub_question_0",
|
55 |
+
"id": "subject_1234-0",
|
56 |
+
"answer": "answer_0"
|
57 |
+
},
|
58 |
+
{
|
59 |
+
"type": "fill-in-the-blank",
|
60 |
+
"question_info": "question" + "sub_question_1",
|
61 |
+
"id": "subject_1234-1",
|
62 |
+
"answer": "answer_1"
|
63 |
+
}
|
64 |
+
]
|
65 |
+
```
|
66 |
+
|
67 |
+
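Conceptually, the expansion behaves like the sketch below (`expand_fill_in_blank` is a hypothetical helper for illustration, not the actual `CmmuDataset` code):

```python
def expand_fill_in_blank(question: dict) -> list[dict]:
    """Split one fill-in-the-blank record into one record per sub-question."""
    return [
        {
            "type": question["type"],
            "question_info": question["question_info"] + sub,
            "id": f"{question['id']}-{i}",
            "answer": question["answer"][i],
        }
        for i, sub in enumerate(question["sub_questions"])
    ]
```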
**About ShiftCheck**

The parameter `shift_check` is `True` by default; you can find more information about `shift_check` in our technical report.

With `shift_check` enabled, `CmmuDataset` generates k new questions per original question, with ids `{original_id}-k`.
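The precise procedure is defined in the technical report; purely as an illustration of the general idea of shifting answer options between question variants (an assumption about what `shift_check` does, not its actual implementation):

```python
def shift_options(options: list[str], k: int) -> list[str]:
    # Illustrative assumption only: circularly shift the choice list by k.
    return options[k:] + options[:k]

# e.g. shift_options(["opt_a", "opt_b", "opt_c", "opt_d"], 1)
# -> ["opt_b", "opt_c", "opt_d", "opt_a"]
```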
## Evaluate

The output format should be a list of JSON dictionaries; the required keys are as follows:

```python
{
    "question_id": "question id",
    "answer": "answer"
}
```
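For example, a results file could be produced like this (a minimal sketch; the `predictions` list is hypothetical and would come from your model's outputs):

```python
import json

# One entry per evaluated question, matching the format above.
predictions = [
    {"question_id": "subject_1234-0", "answer": "model answer"},
]
with open("your_pred_file.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False, indent=2)
```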
The current code calls the GPT-4 API through `AzureOpenAI`; you may need to modify `eval/chat_llm.py` to create your own client. Before running the evaluation, set environment variables such as `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
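For reference, a minimal client setup with the `openai` Python package (v1+) might look like the following; the `api_version` value is an assumption and should match your Azure deployment:

```python
import os
from openai import AzureOpenAI

# Reads the two environment variables mentioned above.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",  # assumption: use a version your deployment supports
)
```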
Run:

```shell
python eval/evaluate.py --result your_pred_file --data_root your_path_to_cmmu_dataset
```

**NOTE** We evaluate fill-in-the-blank questions using GPT-4 by default. If you do not have access to GPT-4, you can attempt a rule-based method instead. However, be aware that the results might differ from the official ones.

```shell
python eval/evaluate.py --result your_pred_file --data_root your_path_to_cmmu_dataset --gpt none
```

To evaluate a specific type of questions, use the `--qtype` parameter, for example:

```shell
python eval/evaluate.py --result example/gpt4v_results_val.json --data_root your_path_to_cmmu_dataset --qtype fbq mrq
```

## Citation
assets/example.png ADDED (Git LFS)
val/biology.jsonl ADDED
val/chemistry.jsonl ADDED
val/geography.jsonl ADDED
val/history.jsonl ADDED
val/images.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:516a9f6fa71e34957850d084d7fc02569836df310124b81380eab1ffa1aba34c
+size 40857600
val/math.jsonl ADDED
val/physics.jsonl ADDED
val/politics.jsonl ADDED