tianyu-z committed commit 4dfe844 (1 parent: 9aba9e3)
Files changed (1): README.md +26 -0
README.md CHANGED
@@ -64,6 +64,32 @@ We found that OCR and text-based processing become ineffective in VCR as accurat
  - **Paper:** [VCR: Visual Caption Restoration](https://arxiv.org/abs/2406.06462)
  - **Point of Contact:** [Tianyu Zhang](mailto:[email protected])
 
+ # Benchmark
+ EM stands for `"Exact Match"` and Jaccard for `"Jaccard Similarity"`; a sketch of both metrics follows the table. The best scores among closed-source models and among open-source models are highlighted in **bold**. Closed-source models are evaluated on 500 test samples, while open-source models are evaluated on 5,000 test samples.
+ | Model | Size (unknown for closed-source models) | En Easy EM | En Easy Jaccard | En Hard EM | En Hard Jaccard | Zh Easy EM | Zh Easy Jaccard | Zh Hard EM | Zh Hard Jaccard |
+ |---|---|---|---|---|---|---|---|---|---|
+ | Claude 3 Opus | - | 62.0 | 77.67 | 37.8 | 57.68 | 0.9 | 11.5 | 0.3 | 9.22 |
+ | Claude 3.5 Sonnet | - | 63.85 | 74.65 | 41.74 | 56.15 | 1.0 | 7.54 | 0.2 | 4.0 |
+ | GPT-4 Turbo | - | 78.74 | 88.54 | 45.15 | 65.72 | 0.2 | 8.42 | 0.0 | 8.58 |
+ | GPT-4V | - | 52.04 | 65.36 | 25.83 | 44.63 | - | - | - | - |
+ | GPT-4o | - | **91.55** | **96.44** | **73.2** | **86.17** | **14.87** | **39.05** | **2.2** | **22.72** |
+ | Gemini 1.5 Pro | - | 62.73 | 77.71 | 28.07 | 51.9 | 1.1 | 11.1 | 0.7 | 11.82 |
+ | Qwen-VL-Max | - | 76.8 | 85.71 | 41.65 | 61.18 | 6.34 | 13.45 | 0.89 | 5.4 |
+ | Reka Core | - | 66.46 | 84.23 | 6.71 | 25.84 | 0.0 | 3.43 | 0.0 | 3.35 |
+ | CogVLM2 | 19B | **83.25** | **89.75** | **37.98** | **59.99** | - | - | - | - |
+ | CogVLM2-Chinese | 19B | - | - | - | - | **33.24** | **57.57** | **1.34** | **17.35** |
+ | DeepSeek-VL | 1.3B | 23.04 | 46.84 | 0.16 | 11.89 | 0.0 | 6.56 | 0.0 | 6.46 |
+ | DeepSeek-VL | 7B | 38.01 | 60.02 | 1.0 | 15.9 | 0.0 | 4.08 | 0.0 | 5.11 |
+ | DocOwl-1.5-Omni | 8B | 0.84 | 13.34 | 0.04 | 7.76 | 0.0 | 1.14 | 0.0 | 1.37 |
+ | Idefics2 | 8B | 15.75 | 31.97 | 0.65 | 9.93 | - | - | - | - |
+ | InternLM-XComposer2-VL | 7B | 46.64 | 70.99 | 0.7 | 12.51 | 0.27 | 12.32 | 0.07 | 8.97 |
+ | InternVL-V1.5 | 25.5B | 14.65 | 51.42 | 1.99 | 16.73 | 4.78 | 26.43 | 0.03 | 8.46 |
+ | MiniCPM-V2.5 | 8B | 31.81 | 53.24 | 1.41 | 11.94 | 4.1 | 18.03 | 0.09 | 7.39 |
+ | Monkey | 7B | 50.66 | 67.6 | 1.96 | 14.02 | 0.62 | 8.34 | 0.12 | 6.36 |
+ | Qwen-VL | 7B | 49.71 | 69.94 | 2.0 | 15.04 | 0.04 | 1.5 | 0.01 | 1.17 |
+ | Yi-VL | 34B | 0.82 | 5.59 | 0.07 | 4.31 | 0.0 | 4.44 | 0.0 | 4.12 |
+ | Yi-VL | 6B | 0.75 | 5.54 | 0.06 | 4.46 | 0.00 | 4.37 | 0.00 | 4.0 |
+
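+ As a rough illustration (this is **not** the official evaluation script, whose normalization and tokenization may differ), the two metrics can be computed per sample along the lines below. The sketch assumes whitespace tokenization for Jaccard, which fits English but not Chinese, where character-level tokens would be needed; the table scores presumably average these per-sample values over the test set, reported as percentages.
+
+ ```python
+ # Illustrative sketch only: assumes whitespace tokenization and may
+ # differ from the official evaluation script's normalization.
+ def exact_match(prediction: str, reference: str) -> float:
+     """1.0 if the restored caption matches the reference exactly, else 0.0."""
+     return float(prediction.strip() == reference.strip())
+
+ def jaccard_similarity(prediction: str, reference: str) -> float:
+     """Jaccard similarity between the token sets of prediction and reference."""
+     pred, ref = set(prediction.split()), set(reference.split())
+     if not pred and not ref:  # two empty captions count as a perfect match
+         return 1.0
+     return len(pred & ref) / len(pred | ref)
+
+ print(exact_match("the cat sat", "the cat sat"))       # 1.0
+ print(jaccard_similarity("the cat sat", "a cat sat"))  # 0.5
+ ```
+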
  # Model Evaluation
 
  ## Method 1 (recommended): use the evaluation script