---
license: apache-2.0
tags:
- llava
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
---

## Model
llava-siglip-internlm2-1_8b-v2 is a LLaVA checkpoint finetuned from [internlm2-1_8b](https://huggingface.co./internlm/internlm2-1_8b) and [siglip-so400m-patch14-384](https://huggingface.co./google/siglip-so400m-patch14-384) on [LLaVA-Pretrain](https://huggingface.co./datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct-150K](https://huggingface.co./datasets/liuhaotian/LLaVA-Instruct-150K) using [Xtuner](https://github.com/InternLM/xtuner). The pretraining phase took 5.5 hours on 4 NVIDIA RTX 4090 GPUs (see this [intermediate checkpoint](https://huggingface.co./StarCycle/llava-siglip-internlm2-1_8b-pretrain-v2)). The finetuning phase took 16 hours on 4 NVIDIA RTX 4090 GPUs.

The total size of the model is around 2.2B parameters, which makes it suitable for embedded applications like robotics. This model performs better than [llava-siglip-internlm2-1_8b-v1](https://huggingface.co./StarCycle/llava-siglip-internlm2-1_8b-v1) because it starts from the base LLM instead of the SFT version.

I have not carefully tuned the hyperparameters during training. If you have any idea to improve the model, please open an issue or just send an email to [email protected]. You are welcome!

## Example
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/AEw4i1rkIcUY74hFLhXLW.png)
Explain this photo in English and Chinese:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/TibbHJOeZeMkV3h2pinXk.png)

## Results
Model | MMBench Test (EN) | MMBench Dev (EN) | MMBench Test (CN) | MMBench Dev (CN) | CCBench Dev
------------- | ------------- | ------------- | ------------- | ------------- | -------------
LLaVA-v1.5-7B | 67.7 | 69.2 | 61.0 | 59.7 | 28.4
LLaVA-InternLM-7B | 69.0 | 68.5 | 66.7 | 63.8 | 37.3
LLaVA-InternLM2-7B | 73.3 | 74.6 | 71.7 | 72.0 | 42.5
Bunny-3B | 69.2 | 68.6 | - | - | -
MiniCPM-V | 64.1 | 67.9 | 62.6 | 65.3 | 41.4 
llava-clip-internlm2-1_8b-v1 | 63.3 | 63.1 | 63.6 | 61.7 | 35.3
llava-siglip-internlm2-1_8b-v1 | 65.7 | 63.5 | 64.5 | 62.9 | 36.3
llava-siglip-internlm2-1_8b-v2 | - | 67.1 | - | 64.1 | 37.3

The test service of the MMBench website is currently down; I will evaluate the missing test-split entries once the service is back to normal.

## Installation
```
# We need the newest versions, so clone from GitHub
git clone https://github.com/huggingface/transformers/
git clone https://github.com/huggingface/peft
git clone https://github.com/InternLM/xtuner
```
Now replace the files in transformers and xtuner with the source files in modified_transformers and modified_xtuner. The modified folders mirror the repo layouts, so copying their contents overwrites the originals:
```
cp -rf ./modified_transformers/* ./transformers/
cp -rf ./modified_xtuner/* ./xtuner/
```

Then run
```
pip install -e ./transformers
pip install -e ./peft
pip install -e './xtuner[deepspeed]'  # quotes keep the shell from globbing the brackets
apt install git-lfs
```
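
An optional sanity check that the editable installs are importable (`xtuner list-cfg` is the stock Xtuner subcommand for listing built-in configs):
```shell
python -c "import transformers, peft; print(transformers.__version__, peft.__version__)"
xtuner list-cfg | head
git lfs version
```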

## Chat
```
xtuner chat internlm/internlm2-1_8b \
--visual-encoder google/siglip-so400m-patch14-384 \
--llava ./lora_and_projectors \
--prompt-template internlm2_chat \
--image $IMAGE_PATH
```
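
The `./lora_and_projectors` folder above holds the adapter weights from this model repo. A minimal way to fetch them (the clone URL is assumed to match this repo's name; adjust if needed):
```shell
# Requires git-lfs so the weight files are actually downloaded
git lfs install
git clone https://huggingface.co./StarCycle/llava-siglip-internlm2-1_8b-v2 lora_and_projectors
```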

## Common Errors
1. 
```
command error: 'libGL.so.1: cannot open shared object file: No such file or directory'!
```
You can solve it by
```
# For Ubuntu
sudo apt-get update
sudo apt-get install libgl1-mesa-glx

# For CentOS and Fedora
sudo yum install mesa-libGL
```

2.
```
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
        Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
```
You can solve it by reinstalling numpy, or by forcing the threading layer as the error message suggests.
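
For example (either option should work; the environment variable is the one named in the error message):
```shell
pip uninstall -y numpy && pip install numpy
# or
export MKL_SERVICE_FORCE_INTEL=1
```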

3.
```
ImportError: 
InternLM2Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
```
You just need
```
pip install protobuf
```
4.
To use tensorboard to visualize the training loss curve:
```
pip install future tensorboard 
```
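
Then point tensorboard at the training output directory (by default `./work_dirs/`, as noted in the training section below; the exact log subfolder may vary by Xtuner version):
```shell
tensorboard --logdir ./work_dirs
```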

5. If your training process is killed during data preprocessing, you can modify `map_num_proc` in `xtuner/xtuner/dataset/huggingface.py`:
```
def process(dataset,
            do_dataset_tokenization=True,
            tokenizer=None,
            max_length=None,
            dataset_map_fn=None,
            template_map_fn=None,
            max_dataset_length=None,
            split='train',
            remove_unused_columns=False,
            rename_maps=[],
            shuffle_before_pack=True,
            pack_to_max_length=True,
            use_varlen_attn=False,
            input_ids_with_output=True,
            with_image_token=False,
            map_num_proc=32): # modify it to a smaller number, e.g., 4
```
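
If you prefer not to edit the file by hand, a one-liner patch works too (the `map_num_proc=32` default is taken from the snippet above):
```shell
sed -i 's/map_num_proc=32/map_num_proc=4/' ./xtuner/xtuner/dataset/huggingface.py
```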

6. If you fail to load the model, check whether you installed git-lfs and actually downloaded the model file.

## Data preparation
1. File structure

```
# . means the root folder of this repo after cloning
./data/llava_data
├── LLaVA-Pretrain
│   ├── blip_laion_cc_sbu_558k.json
│   ├── blip_laion_cc_sbu_558k_meta.json
│   └── images
├── LLaVA-Instruct-150K
│   └── llava_v1_5_mix665k.json
└── llava_images
    ├── coco
    │   └── train2017
    ├── gqa
    │   └── images
    ├── ocr_vqa
    │   └── images
    ├── textvqa
    │   └── train_images
    └── vg
        ├── VG_100K
        └── VG_100K_2
```
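
You can create this skeleton up front so the downloads below have a place to land:
```shell
mkdir -p ./data/llava_data/LLaVA-Pretrain \
         ./data/llava_data/LLaVA-Instruct-150K \
         ./data/llava_data/llava_images/{coco,gqa,ocr_vqa,textvqa,vg}
```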

2. Pretrain Data

LLaVA-Pretrain

```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co./datasets/liuhaotian/LLaVA-Pretrain --depth=1
```
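
After cloning, move the dataset into the layout above (paths assumed; if the images ship as an archive, unpack it into the `images` folder — check the dataset card):
```shell
mv ./LLaVA-Pretrain/* ./data/llava_data/LLaVA-Pretrain/
```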

3. Finetune Data

3.1 Text data

   LLaVA-Instruct-150K

   ```shell
   # Make sure you have git-lfs installed (https://git-lfs.com)
   git lfs install
   git clone https://huggingface.co./datasets/liuhaotian/LLaVA-Instruct-150K --depth=1
   ```

3.2 Image data (a combined download sketch follows this list)

   3.2.1 COCO (coco): [train2017](http://images.cocodataset.org/zips/train2017.zip)

   3.2.2 GQA (gqa): [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)

   3.2.3 OCR-VQA (ocr_vqa): [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing)

      ⚠️⚠️⚠️ Modify the name of OCR-VQA's images to keep the extension as `.jpg`!

         ```shell
         #!/bin/bash
         ocr_vqa_path="<your-directory-path>"

         # Copy every non-jpg image to a .jpg twin so all files end in .jpg
         find "$ocr_vqa_path" -type f | while read -r file; do
             extension="${file##*.}"
             if [ "$extension" != "jpg" ]
             then
                 cp -- "$file" "${file%.*}.jpg"
             fi
         done
         ```

   3.2.4 TextVQA (textvqa): [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)

   3.2.5 VisualGenome (VG): [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)
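
A combined sketch for fetching and unpacking the image sets into the layout above (URLs from the list; archive layouts are assumed to match the file-structure tree):
```shell
cd ./data/llava_data/llava_images
wget http://images.cocodataset.org/zips/train2017.zip && unzip train2017.zip -d coco
wget https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip && unzip images.zip -d gqa
wget https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip && unzip train_val_images.zip -d textvqa
wget https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip -O vg1.zip && unzip vg1.zip -d vg
wget https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip -O vg2.zip && unzip vg2.zip -d vg
# OCR-VQA is fetched via its download script (see 3.2.3); then run the .jpg renaming snippet above
```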

## Cheers! Now train your own model!
1. Alignment module pretraining
```
# single GPU
xtuner train ./pretrain.py --deepspeed deepspeed_zero2

# multiple GPU
NPROC_PER_NODE=4 xtuner train ./pretrain.py --deepspeed deepspeed_zero2
```

#### Remember to adjust the batch size and gradient accumulation settings to fit your hardware: to reproduce my results, keep GPU_num * batch_size * gradient_accumulation roughly equal to mine.
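
A quick way to locate those knobs in the config (variable names assumed from typical Xtuner LLaVA configs):
```shell
grep -n "batch_size\|accumulative_counts" ./pretrain.py
```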

The checkpoints and tensorboard logs are saved by default in ./work_dirs/. I train for only 1 epoch, matching the original LLaVA paper. Some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse on other domains.

This is my loss curve for llava-siglip-internlm2-1_8b-pretrain-v2:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/1vh2gsRzEFXia7zCRRIlz.png)

And the learning rate curve:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/CdNSJcLCv9MmYG88AJFwB.png)

2. Instruction following fine-tuning
```
NPROC_PER_NODE=4 xtuner train ./finetune.py --deepspeed deepspeed_zero2
```
Here is my loss curve (it fluctuates strongly because the batch size is small and I record per-batch loss rather than per-epoch loss):
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/DG1ac7BeaVTrfKqJHA2u8.png)

And the learning rate curve:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/rbuVFeTa04qTbN5j_QF64.png)

## Convert the checkpoints to Hugging Face safetensors format
```
xtuner convert pth_to_hf ./finetune.py ./work_dirs/iter_xxx.pth ./my_lora_and_projector
```
The adapter still needs to be used together with internlm/internlm2-1_8b and the vision encoder. I have not tried merging them yet, but it is possible with Xtuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
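
A minimal merge sketch, assuming the `xtuner convert merge` subcommand described in that tutorial (arguments are illustrative, not verified for this checkpoint):
```shell
xtuner convert merge internlm/internlm2-1_8b ./my_lora_and_projector ./merged_llm
```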

## MMBench Evaluation
You can first download the MMBench data:
```
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/CCBench.tsv
```
Then run:
```
NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-1_8b \
--visual-encoder google/siglip-so400m-patch14-384 \
--llava ./my_lora_and_projector \
--prompt-template internlm2_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
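
For example, to run the English dev split with the file downloaded above:
```shell
export MMBENCH_DATA_PATH=./MMBench_DEV_EN.tsv
export RESULT_PATH=./mmbench_results
```
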
You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate it on other benchmarks.

## Deployment
The Xtuner team is developing an HF chatbot (based on Huggingface transformers) and an LMDeploy chatbot (based on TurboMind). I am waiting for the final version of their APIs.