JUNJIE99 committed
Commit 6007438 · verified · Parent: c25733e

Upload folder using huggingface_hub

Files changed (2):
  1. LICENSE +22 -0
  2. README.md +120 -3
LICENSE ADDED

MIT License

Copyright (c) 2024 JUNJIE99

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md CHANGED

<h1 align="center">MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval</h1>

<p align="center">
    <a href="https://arxiv.org/abs/2412.14475">
        <img alt="Build" src="http://img.shields.io/badge/cs.CV-arXiv%3A2412.14475-B31B1B.svg">
    </a>
    <a href="https://github.com/VectorSpaceLab/MegaPairs">
        <img alt="Build" src="https://img.shields.io/badge/Github-Code-blue">
    </a>
    <a href="https://huggingface.co/datasets/JUNJIE99/MegaPairs">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Datasets-MegaPairs-yellow">
    </a>
</p>

<p align="center">
    <a href="https://huggingface.co/JUNJIE99/MMRet-base">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_base-yellow">
    </a>
    <a href="https://huggingface.co/JUNJIE99/MMRet-large">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_large-yellow">
    </a>
    <a href="https://huggingface.co/JUNJIE99/MMRet-MLLM">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_MLLM-yellow">
    </a>
</p>

## News
```2024-12-27``` 🚀🚀 MMRet-CLIP models are released on Hugging Face: [MMRet-base](https://huggingface.co/JUNJIE99/MMRet-base) and [MMRet-large](https://huggingface.co/JUNJIE99/MMRet-large).

```2024-12-19``` 🎉🎉 Released our paper: [MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval](https://arxiv.org/pdf/2412.14475).

## Release Plan
- [x] Paper
- [x] MMRet-base and MMRet-large models
- [ ] MMRet-MLLM model
- [ ] MegaPairs Dataset
- [ ] Evaluation code
- [ ] Fine-tuning code

## Introduction
In this project, we introduce **MegaPairs**, a novel data synthesis method that leverages open-domain images to create *heterogeneous KNN triplets* for universal multimodal retrieval. The MegaPairs dataset contains over 26 million triplets, on which we have trained a series of multimodal retrieval models, **MMRets**, including MMRet-CLIP (base and large) and MMRet-MLLM.

MMRets achieve state-of-the-art performance on four popular zero-shot composed image retrieval benchmarks and on the Massive Multimodal Embedding Benchmark (MMEB). Extensive experiments demonstrate the ***efficiency, scalability, and generalization*** of MegaPairs. Please refer to our [paper](https://arxiv.org/abs/2412.14475) for more details.
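
Concretely, each triplet pairs a multimodal query (an image plus a textual relation) with a target image, mirroring the composed-retrieval usage shown below. As a purely illustrative sketch of one such triplet (the dataset is not yet released, so the field names and paths here are hypothetical):

```python
# Hypothetical illustration of one MegaPairs-style triplet.
# Field names and paths are invented for exposition; the released
# dataset may use a different schema.
triplet = {
    "query_image": "images/000001.jpg",   # query-side image
    "query_text": "find the same landmark photographed at night",  # textual relation
    "target_image": "images/104392.jpg",  # relevant target image
}
print(triplet["query_text"])
```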

## Model Usage

### 1. MMRet-CLIP Models
You can easily use the MMRet-CLIP models via ```transformers```:
```python
import torch
from transformers import AutoModel

MODEL_NAME = "JUNJIE99/MMRet-base" # or "JUNJIE99/MMRet-large"

model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True) # You must set trust_remote_code=True
model.set_processor(MODEL_NAME)
model.eval()

with torch.no_grad():
    # Encode a composed query: a reference image plus a modification instruction.
    query = model.encode(
        images="./assets/cir_query.png",
        text="Make the background dark, as if the camera has taken the photo at night"
    )

    # Encode the candidate images.
    candidates = model.encode(
        images=["./assets/cir_candi_1.png", "./assets/cir_candi_2.png"]
    )

    # Similarity between the query embedding and each candidate embedding.
    scores = query @ candidates.T
print(scores)
```
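
Since `scores` holds one similarity per candidate, turning it into a retrieval result is a single sort. A minimal sketch continuing the snippet above (it assumes `encode` returns `torch.Tensor` embeddings as used there; the ranking code is our own illustration, not part of the model's API):

```python
# Rank the candidates for the composed query by similarity score.
candidate_paths = ["./assets/cir_candi_1.png", "./assets/cir_candi_2.png"]

sims = scores.reshape(-1)  # flatten to one score per candidate
ranking = torch.argsort(sims, descending=True)
for rank, idx in enumerate(ranking.tolist(), start=1):
    print(f"{rank}. {candidate_paths[idx]} (score={sims[idx].item():.4f})")
```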

### 2. MMRet-MLLM Models
```Will be released soon.```

## Model Performance
### Zero-Shot Composed Image Retrieval

MMRet sets a new performance benchmark for zero-shot composed image retrieval tasks. On the CIRCO benchmark, our MMRet-base model, with only 149 million parameters, surpasses all previous models, including those with 50 times more parameters. Additionally, MMRet-MLLM achieves an 8.1% improvement over the previous state-of-the-art model.

<img src="./assets/res-zs-cir.png" width="800">

### Zero-Shot Performance on MMEB

MMRet-MLLM achieves state-of-the-art zero-shot performance on the Massive Multimodal Embedding Benchmark (MMEB), despite being trained only on the ImageText-to-Image paradigm. This demonstrates the excellent generalization capability of MegaPairs for multimodal embedding.

<img src="./assets/res-zs-mmeb.png" width="800">

### Fine-Tuning Performance on MMEB

After fine-tuning on downstream tasks, MMRet-MLLM maintains its leading performance. Notably, it surpasses the previous state-of-the-art by 7.1% on the MMEB out-of-distribution (OOD) set. These results demonstrate the robust generalization capability of MMRet-MLLM and highlight the potential of MegaPairs as foundational training data for universal multimodal embedding.

<img src="./assets/res-ft-mmeb.png" width="800">

### Performance Scaling
MegaPairs showcases **scalability**: MMRet-base improves steadily as the amount of training data increases. It also demonstrates **efficiency**: with just 0.5M training samples, MMRet-base significantly outperforms MagicLens, which uses the same CLIP-base backbone but was trained on 36.7M samples.

<img src="./assets/res-scaling.png" width="800">

## License
The annotations for MegaPairs and the MMRet models are released under the [MIT License](LICENSE). The images in MegaPairs originate from [Recap-DataComp-1B](https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B), which is released under the CC BY 4.0 license.

## Citation
If you find this repository useful, please consider giving it a star ⭐ and a citation:

```
@article{zhou2024megapairs,
  title={MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval},
  author={Zhou, Junjie and Liu, Zheng and Liu, Ze and Xiao, Shitao and Wang, Yueze and Zhao, Bo and Zhang, Chen Jason and Lian, Defu and Xiong, Yongping},
  journal={arXiv preprint arXiv:2412.14475},
  year={2024}
}
```