Commit 30cf1fc (verified) · 1 parent: f7cc377
QuentinJG committed: Create README.md

Files changed (1): README.md (+78 −0)

---
license: mit
library_name: colpali
base_model: HuggingFaceTB/SmolVLM-256M-Base
language:
- en
tags:
- colsmolvlm
- vidore-experimental
- vidore
---
# ColSmolVLM-256M-Base: Visual Retriever based on SmolVLM-256M-Base with ColBERT strategy

### This is a version trained with batch_size 32 for 3 epochs

ColSmolVLM is a model built on a novel architecture and training strategy that leverages Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).

<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
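For intuition, the ColBERT-style late-interaction scoring that these multi-vector representations enable can be sketched in a few lines. This is a minimal illustration (the function name, the 128-dimensional embeddings, and the random inputs are assumptions for the example), not the library's implementation:

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late-interaction (MaxSim) score for one query/document pair.

    query_emb: (n_query_tokens, dim) multi-vector query embedding
    doc_emb:   (n_doc_tokens, dim)   multi-vector document-page embedding
    Vectors are assumed L2-normalized, so dot products are cosine similarities.
    """
    sim = query_emb @ doc_emb.T  # (n_query_tokens, n_doc_tokens)
    # Each query token keeps its best-matching document vector; scores are summed.
    return sim.max(dim=1).values.sum()

# Toy usage with random, normalized vectors.
q = torch.nn.functional.normalize(torch.randn(16, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(1024, 128), dim=-1)
print(maxsim_score(q, d))
```

Ranking a query against an indexed corpus then reduces to computing this score between the query's multi-vectors and each page's stored multi-vectors.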
## Version specificity

This version was trained with commit b983e40 of the ColPali repository (main branch).

Data is the same as the ColPali data described in the paper.

## Model Training

### Dataset

Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document appears in both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and the train set, to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.

*Note: Multilingual data is present in the pretraining corpus of the language model, and most probably in the multimodal training data as well.*

### Parameters

Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers of the language model,
as well as on the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on a 4-GPU setup with data parallelism, a learning rate of 5e-4 with linear decay and 2.5% warmup steps, and a batch size of 8.
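As a rough illustration, the hyperparameters above might be expressed with `peft` and `transformers` as follows. This is a sketch, not the exact training configuration used; in particular, the target module names and the output path are assumptions:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA on the language-model layers; the module names below are illustrative
# and depend on the backbone's actual layer naming.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Optimizer and schedule roughly matching the description above.
training_args = TrainingArguments(
    output_dir="./colsmolvlm-train",  # hypothetical path
    bf16=True,
    optim="paged_adamw_8bit",
    learning_rate=5e-4,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,               # 2.5% warmup steps
    per_device_train_batch_size=8,
)
```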
## Usage

This model should not be used as-is: it is the base model, used only to initialize the linear head weights of the full model.
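For reference, a *trained* ColSmolVLM checkpoint would typically be loaded through the `colpali-engine` package. The snippet below is a sketch assuming its `ColIdefics3` classes and an example trained checkpoint id; neither applies to this base model:

```python
import torch
from colpali_engine.models import ColIdefics3, ColIdefics3Processor

# Substitute a trained ColSmolVLM checkpoint, not this base model.
model_id = "vidore/colsmolvlm-alpha"  # example trained checkpoint (assumed)

model = ColIdefics3.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
).eval()
processor = ColIdefics3Processor.from_pretrained(model_id)
```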
## Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.

## License

ColSmolVLM's vision-language backbone model (SmolVLM) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.

## Contact

- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]

## Citation

If you use any datasets or models from this organization in your research, please cite the original dataset as follows:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```