luodian committed
Commit f4b3fa4
1 Parent(s): 8e79c43

Create README.md

Files changed (1):
  1. README.md +127 -0

README.md ADDED
---
datasets:
- lmms-lab/LLaVA-OneVision-Data
language:
- en
- zh
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- multimodal
---

# LLaVA-OneVision

![banner](https://i.postimg.cc/pL17YtG4/WX20240508-220230-2x.png)

Play with the model on the [LLaVA OneVision Chat](https://llava-onevision.lmms-lab.com/).

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

The LLaVA-OneVision models are 0.5B, 7B, and 72B parameter models trained on [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), built on the Qwen2 language model with a context window of 32K tokens.

Models with the `-chat` suffix have additionally gone through iterative DPO training on human-preference data and are suited for chat usage. Our research shows that this iterative DPO recipe improves the model's chat ability while maintaining its other instruction-following abilities.

- **Repository:** [LLaVA-VL/LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT?tab=readme-ov-file)
- **Project Website:** [llava-onevision.lmms-lab.com](https://llava-onevision.lmms-lab.com)
- **Paper:** [LLaVA-OneVision](https://arxiv.org/abs/2408.03326)
- **Point of Contact:** [Bo Li](mailto:[email protected])
- **Languages:** English, Chinese

## Use

### Intended use

The model was trained on the [LLaVA-OneVision Dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data) and can interact with single images, multi-image inputs, and videos.

**Feel free to share your generations in the Community tab!**

### Generation

```python
# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from llava.conversation import conv_templates, SeparatorStyle

from PIL import Image
import requests
import copy
import torch
import warnings

warnings.filterwarnings("ignore")

pretrained = "lmms-lab/llava-onevision-qwen2-0.5b-si"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"

# Load tokenizer, model, and image processor; pass any extra llava_model_args you need.
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)
model.eval()

# Fetch a demo image and preprocess it into the model's visual input format.
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]

# Build the prompt with the chat template; make sure you use the correct template for your model.
conv_template = "qwen_1_5"
question = DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

# Tokenize the prompt, mapping the image placeholder to IMAGE_TOKEN_INDEX.
input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]

# Greedy decoding.
cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=4096,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)
```
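
The intended-use note above also mentions multi-image inputs. The following is a minimal, illustrative sketch (not part of the original card) of one way to continue the example with several images: it reuses the `model`, `tokenizer`, `image_processor`, and `conv_template` loaded above, repeats `DEFAULT_IMAGE_TOKEN` once per image, and passes all processed images and their sizes to `generate`. The prompt layout and the reuse of the same demo image are assumptions for illustration; the OneVision-stage checkpoints are the ones trained on multi-image and video data.

```python
# Illustrative multi-image continuation of the example above (assumes the model,
# tokenizer, image_processor, and conv_template from the previous block are in scope).
# The same demo image is reused as a stand-in for a second image; substitute your own.
image2 = Image.open(requests.get(url, stream=True).raw)
images = [image, image2]

# Preprocess every image and move each tensor to the model's device/dtype.
image_tensors = process_images(images, image_processor, model.config)
image_tensors = [img.to(dtype=torch.float16, device=device) for img in image_tensors]
image_sizes = [img.size for img in images]

# One image token per input image, followed by the question.
question = (DEFAULT_IMAGE_TOKEN + "\n") * len(images) + "What is the relationship between these images?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
input_ids = tokenizer_image_token(conv.get_prompt(), tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)

out = model.generate(
    input_ids,
    images=image_tensors,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=512,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```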

## Training

### Model

- **Architecture:** SO400M + Qwen2
- **Pretraining Stage:** LCS-558K, 1 epoch, projector
- **Mid Stage:** A mixture of 4.7M high-quality synthetic data, 1 epoch, full model
- **Final-Image Stage:** A mixture of 3.6M single-image data, 1 epoch, full model
- **OneVision Stage:** A mixture of 1.6M single-image/multi-image/video data, 1 epoch, full model
- **Precision:** bfloat16

### Hardware & Software

- **GPUs:** 256 \* Nvidia Tesla A100 (for whole model series training)
- **Orchestration:** [Huggingface Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) (an illustrative sketch follows this list)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)

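For orientation only, the snippet below shows what the listed setup (Huggingface Trainer orchestration, bfloat16 precision, 1 epoch per stage) corresponds to as `TrainingArguments`. It is an illustrative sketch, not the authors' actual configuration; batch size, accumulation steps, and output path are placeholders.

```python
# Illustrative only — NOT the actual LLaVA-OneVision training configuration.
# Expresses "Huggingface Trainer, bfloat16, 1 epoch per stage" as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="onevision-stage-sketch",  # placeholder output path
    num_train_epochs=1,                   # each training stage above runs for 1 epoch
    bf16=True,                            # bfloat16 precision, as listed above
    per_device_train_batch_size=1,        # placeholder
    gradient_accumulation_steps=4,        # placeholder
    report_to="none",
)
print(args.num_train_epochs, args.bf16)
```

These arguments would then be passed, together with the model, dataset, and a multimodal data collator, to `transformers.Trainer`.
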
## Citation

```bibtex
@article{li2024llavaonevision,
  title={LLaVA-OneVision: Easy Visual Task Transfer},
  journal={arXiv preprint arXiv:2408.03326},
  year={2024},
  url={https://arxiv.org/abs/2408.03326}
}
```