Datasets:

Modalities:
Image
Text
Formats:
parquet
ArXiv:
2401.13478
Libraries:
Datasets
Dask
Files changed (1)
  1. README.md +2 -7
README.md CHANGED
@@ -42,18 +42,13 @@ dataset_info:
 
 This is the repo for the paper [SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval](https://arxiv.org/abs/2401.13478).
 
-<div align="center">
-<img src=./imgs/Framework.png width=80% />
-</div>
-
+![main_result](./imgs/Framework.png)
 
 In this paper, we propose a novel SciMMIR benchmark and a corresponding dataset designed to address the gap in evaluating multi-modal information retrieval (MMIR) models in the scientific domain.
 
 It is worth mentioning that we define a data hierarchical architecture of "Two subsets, Five subcategories" and use human-created keywords to classify the data (as shown in the table below).
 
-<div align="center">
-<img src=./imgs/data_architecture.png width=50% />
-</div>
+![main_result](./imgs/data_architecture.png)
 
 
 As shown in the table below, we conducted extensive baselines (both fine-tuning and zero-shot) within various subsets and subcategories.
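Since the card lists the Datasets library (and Parquet as the storage format), the data can be pulled directly with `datasets.load_dataset`. A minimal sketch, assuming the dataset is hosted under the repo ID `m-a-p/SciMMIR` (the exact repo ID and field names are not stated in this excerpt and may differ):

```python
from datasets import load_dataset

# Load the SciMMIR benchmark from the Hugging Face Hub.
# Repo ID is an assumption; replace it with the actual dataset path if different.
ds = load_dataset("m-a-p/SciMMIR")

# Inspect the available splits and the fields of one example
# (image/caption field names depend on the dataset card).
print(ds)
first_split = list(ds.keys())[0]
print(ds[first_split][0].keys())
```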