---
license: mit
datasets:
- mlfoundations/datacomp_1b
pipeline_tag: feature-extraction
---

# Model card for ViTamin-S-LTT

Official Hugging Face models of **ViTamin**, from the following CVPR 2024 paper:

[ViTamin: Designing Scalable Vision Models in the Vision-language Era](https://arxiv.org/pdf/2404.02132.pdf).\
✨  [Jieneng Chen](https://beckschen.github.io), [Qihang Yu](https://yucornetto.github.io/), [Xiaohui Shen](https://xiaohuishen.github.io/), [Alan Yuille](https://www.cs.jhu.edu/~ayuille/) and [Liang-Chieh Chen](http://liangchiehchen.com/)\
🏠  Johns Hopkins University, Bytedance

Load the model from the Hugging Face Hub with `transformers.AutoModel`:

```python
import torch
import open_clip
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# obtained 69.1% zero-shot ImageNet accuracy
model = AutoModel.from_pretrained(
    'jienengchen/ViTamin-B',
    trust_remote_code=True).to(device).eval()

image = Image.open('./image.png').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('jienengchen/ViTamin-B')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).to(device)

tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K')
text = tokenizer(["a photo of vitamin", "a dog", "a cat"]).to(device)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features, text_features, logit_scale = model(pixel_values, text)
    text_probs = (100.0 * image_features @ text_features.to(torch.float).T).softmax(dim=-1)

print("Label probs:", text_probs)
```
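
Because the card is tagged `feature-extraction`, the `image_features` returned by the forward call can also be used directly as image embeddings. A minimal sketch continuing from the snippet above; the L2-normalization step is an assumption for downstream retrieval use, not part of the official example:

```python
import torch.nn.functional as F

# Reuse model, pixel_values and text from the snippet above.
with torch.no_grad(), torch.cuda.amp.autocast():
    image_features, _, _ = model(pixel_values, text)

# L2-normalize so that dot products between embeddings are cosine similarities.
image_embeddings = F.normalize(image_features.float(), dim=-1)
print("Embedding shape:", image_embeddings.shape)
```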

## Main Results with CLIP Pre-training on DataComp-1B

| image encoder | image size | num patches | text encoder depth/width | seen samples (B) | trainable params Image+Text (M) | MACs Image+Text (G) | ImageNet Acc. | avg. 38 datasets | ImageNet dist. shift | VTAB | retrieval |
|---------------|------------|-------------|--------------------------|------------------|---------------------------------|---------------------|---------------|------------------|----------------------|------|-----------|
| ViTamin-S | 224 | 196 | 12/384 | 1.28 | 22.0+40.4 | 5.50+1.64 | 62.2 | 53.2 | 51.3 | 51.7 | 50.0 |
| ViTamin-S-LTT | 224 | 196 | 12/384 | 1.28 | 22.0+40.4 | 5.50+1.64 | 63.4 | 54.6 | 51.6 | 54.9 | 52.9 |
| ViTamin-B | 224 | 196 | 12/512 | 1.28 | 87.5+63.4 | 21.8+2.9 | 68.9 | 57.7 | 58.3 | 56.4 | 54.1 |
| ViTamin-B-LTT | 224 | 196 | 12/512 | 1.28 | 87.5+63.4 | 21.8+2.9 | 70.8 | 59.4 | 59.3 | 56.6 | 59.4 |
| ViTamin-L | 224 | 196 | 12/768 | 12.8 | 333.3+123.7 | 72.6+6.6 | 80.8 | 66.7 | 69.8 | 65.3 | 60.3 |
| ViTamin-L | 256 | 256 | 12/768 | 12.8+0.2 | 333.4+123.7 | 94.8+6.6 | 81.2 | 67.0 | 71.1 | 65.3 | 61.2 |
| ViTamin-L | 336 | 441 | 12/768 | 12.8+0.2 | 333.6+123.7 | 163.4+6.6 | 81.6 | 67.0 | 72.1 | 64.4 | 61.6 |
| ViTamin-L | 384 | 576 | 12/768 | 12.8+0.2 | 333.7+123.7 | 213.4+6.6 | 81.8 | 67.2 | 72.4 | 64.7 | 61.8 |
| ViTamin-L2 | 224 | 196 | 24/1024 | 12.8 | 333.6+354.0 | 72.6+23.3 | 80.9 | 66.4 | 70.6 | 63.4 | 61.5 |
| ViTamin-L2 | 256 | 256 | 24/1024 | 12.8+0.5 | 333.6+354.0 | 94.8+23.3 | 81.5 | 67.4 | 71.9 | 64.1 | 63.1 |
| ViTamin-L2 | 336 | 441 | 24/1024 | 12.8+0.5 | 333.8+354.0 | 163.4+23.3 | 81.8 | 67.8 | 73.0 | 64.5 | 63.6 |
| ViTamin-L2 | 384 | 576 | 24/1024 | 12.8+0.5 | 334.0+354.0 | 213.4+23.3 | 82.1 | 68.1 | 73.4 | 64.8 | 63.7 |
| ViTamin-XL | 256 | 256 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 125.3+33.1 | 82.1 | 67.6 | 72.3 | 65.4 | 62.7 |
| ViTamin-XL | 384 | 576 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 281.9+33.1 | 82.6 | 68.1 | 73.6 | 65.6 | 63.8 |
| ViTamin-XL | 256 | 256 | 27/1152 | 40 | 436.1+488.7 | 125.3+33.1 | 82.3 | 67.5 | 72.8 | 64.0 | 62.1 |
| ViTamin-XL | 336 | 441 | 27/1152 | 40+1 | 436.1+488.7 | 215.9+33.1 | 82.7 | 68.0 | 73.9 | 64.1 | 62.6 |
| ViTamin-XL | 384 | 576 | 27/1152 | 40+1 | 436.1+488.7 | 281.9+33.1 | 82.9 | 68.1 | 74.1 | 64.0 | 62.5 |
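
To cross-check the trainable-parameter column above against a loaded checkpoint, a quick count over the `model` object from the usage example can help; note this counts everything wrapped by `AutoModel`, so it only approximates the Image+Text split reported in the table:

```python
# Count trainable parameters of the loaded AutoModel wrapper, in millions.
num_params_m = sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
print(f"trainable params: {num_params_m:.1f}M")
```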

## Main Results on Downstream Tasks

**Open-Vocab Detection**

| image encoder | detector | OV-COCO (AP<sub>50</sub><sup>novel</sup>) | OV-LVIS (AP<sub>r</sub>) |
|---------------|----------|-------------------------------------------|--------------------------|
| ViT-L/14 | Sliding F-ViT | 36.1 | 32.5 |
| ViTamin-L | Sliding F-ViT | 37.5 | 35.6 |

**Open-Vocab Segmentation**

| image encoder | segmentor | ADE | Cityscapes | MV | A-150 | A-847 | PC-459 | PC-59 | PAS-21 |
|---------------|-----------|-----|------------|----|-------|-------|--------|-------|--------|
| ViT-L/14 | Sliding FC-CLIP | 24.6 | 40.7 | 16.5 | 31.8 | 14.3 | 18.3 | 55.1 | 81.5 |
| ViTamin-L | Sliding FC-CLIP | 27.3 | 44.0 | 18.2 | 35.6 | 16.1 | 20.4 | 58.4 | 83.4 |

Note: the panoptic datasets (ADE, Cityscapes, MV) are reported in PQ, and the semantic datasets (A-150, A-847, PC-459, PC-59, PAS-21) in mIoU.

**Large Multi-modal Models**

| image encoder | image size | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-B-CN | SEED | LLaVA-Wild | MM-Vet |
|---------------|------------|-------|------|--------|-----|-------|------|------|----------|---------|------|------------|--------|
| ViTamin-L | 224 | 78.4 | 61.6 | 51.1 | 66.9 | 58.7 | 84.6 | 1421 | 65.4 | 58.4 | 57.7 | 64.5 | 33.6 |
| ViTamin-L | 384 | 78.9 | 61.6 | 55.4 | 67.6 | 59.8 | 85.5 | 1447 | 64.5 | 58.3 | 57.9 | 66.1 | 33.6 |

## Citing ViTamin

```
@inproceedings{chen2024vitamin,
  title={ViTamin: Designing Scalable Vision Models in the Vision-language Era},
  author={Chen, Jieneng and Yu, Qihang and Shen, Xiaohui and Yuille, Alan and Chen, Liang-Chieh},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```