---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: visual-question-answering
---
## Model
llava-clip-internlm2-1_8b-v1 is a LLaVA checkpoint finetuned from [internlm2-chat-1_8b](https://huggingface.co./internlm/internlm2-chat-1_8b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co./openai/clip-vit-large-patch14-336) on [LLaVA-Pretrain](https://huggingface.co./datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct-150K](https://huggingface.co./datasets/liuhaotian/LLaVA-Instruct-150K) using [Xtuner](https://github.com/InternLM/xtuner). The pretraining phase took 16 hours on a single Nvidia A6000 ada GPU (see this [intermediate checkpoint](https://huggingface.co./StarCycle/llava-clip-internlm2-1_8b-pretrain-v1/)). The finetuning phase took 16 hours on 4 Nvidia A6000 GPUs.
The total size of the model is around 2.2B parameters, which makes it suitable for embedded applications like robotics.
I have not carefully tuned the hyperparameters during training. If you have any idea to improve the model, please open an issue or just send an email to [email protected]. You are welcome!
## Example
![5bb2f23dd595d389e6a9a0aadebd87c.png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/iOFZOwLGfEByCQ_2EkR7y.png)
Explain the photo in English and Chinese:
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/mbrdKMw-lAHRt2dFHWoGW.jpeg)
## Rank
In submission...
## Results
Model | MMBench Test (EN) | MMBench Dev (EN) | MMBench Test (CN) | MMBench Dev (CN) | CCBench Dev
------------- | ------------- | ------------- | ------------- | ------------- | -------------
LLaVA-v1.5-7B | 67.7 | 69.2 | 61.0 | 59.7 | 28.4
LLaVA-InternLM-7B | 69.0 | 68.5 | 66.7 | 63.8 | 37.3
LLaVA-InternLM2-7B | 73.3 | 74.6 | 71.7 | 72.0 | 42.5
Bunny-3B | 69.2 | 68.6 | - | - | -
MiniCPM-V | 64.1 | 67.9 | 62.6 | 65.3 | 41.4
llava-clip-internlm2-1_8b-v1 | 63.3 | 63.1 | 63.6 | 61.7 | 35.3
## Installation
```
git clone https://github.com/InternLM/xtuner
pip install -e ./xtuner[deepspeed]
apt install git-lfs
```
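If you want to confirm the installation and fetch the LLaVA config used below into your working directory, Xtuner's `list-cfg` and `copy-cfg` commands can help (the config name is the one referenced later in this README; verify that it exists in your Xtuner version):
```
xtuner list-cfg | grep llava_internlm2
xtuner copy-cfg llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain .
```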
## Chat
```
xtuner chat internlm/internlm2-chat-1_8b \
--visual-encoder openai/clip-vit-large-patch14-336 \
--llava ./lora_and_projector \
--prompt-template internlm2_chat \
--image $IMAGE_PATH
```
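Here `./lora_and_projector` is the folder containing this repository's adapter and projector weights, and `$IMAGE_PATH` is any local image you want to ask about, e.g. (the path is just a placeholder):
```
export IMAGE_PATH=./example.jpg
```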
## Common Errors
1.
```
command error: 'libGL.so.1: cannot open shared object file: No such file or directory'!
```
You can solve it by
```
# For Ubuntu
sudo apt-get update
sudo apt-get install libgl1-mesa-glx
# For CentOS and Fedora
sudo yum install mesa-libGL
```
2.
```
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
```
You can solve it by reinstalling numpy.
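For example (the second option forces the MKL threading layer instead of reinstalling, as suggested by the error message itself):
```
pip uninstall -y numpy
pip install numpy
# or, without reinstalling:
export MKL_SERVICE_FORCE_INTEL=1
```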
3.
```
ImportError:
InternLM2Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
```
You just need to run
```
pip install protobuf
```
4.
To use tensorboard to visualize the training loss curve:
```
pip install future tensorboard
```
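Then point tensorboard at the training work directory (by default `./work_dirs/`, as mentioned in the training section below; the exact subfolder containing the event files depends on your run):
```
tensorboard --logdir ./work_dirs
```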
5. If your training process is killed during data preprocessing, you can modify `map_num_proc` in `xtuner/xtuner/dataset/huggingface.py`:
```
def process(dataset,
do_dataset_tokenization=True,
tokenizer=None,
max_length=None,
dataset_map_fn=None,
template_map_fn=None,
max_dataset_length=None,
split='train',
remove_unused_columns=False,
rename_maps=[],
shuffle_before_pack=True,
pack_to_max_length=True,
use_varlen_attn=False,
input_ids_with_output=True,
with_image_token=False,
map_num_proc=32): # modify it to a smaller number, e.g., 4
```
6. If you fail to load the model, check whether you installed git-lfs and actually downloaded the model file.
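A quick check is to pull the LFS objects explicitly inside the cloned repository; if only pointer files were downloaded, this fetches the real weights:
```
git lfs install
git lfs pull
```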
## Data preparation
1. File structure
```
# . means the root of this repository after you clone it
./data/llava_data
β”œβ”€β”€ LLaVA-Pretrain
β”‚   β”œβ”€β”€ blip_laion_cc_sbu_558k.json
β”‚   β”œβ”€β”€ blip_laion_cc_sbu_558k_meta.json
β”‚   └── images
β”œβ”€β”€ LLaVA-Instruct-150K
β”‚   └── llava_v1_5_mix665k.json
└── llava_images
    β”œβ”€β”€ coco
    β”‚   └── train2017
    β”œβ”€β”€ gqa
    β”‚   └── images
    β”œβ”€β”€ ocr_vqa
    β”‚   └── images
    β”œβ”€β”€ textvqa
    β”‚   └── train_images
    └── vg
        β”œβ”€β”€ VG_100K
        └── VG_100K_2
```
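You can create the image folder skeleton in advance, for example:
```shell
mkdir -p ./data/llava_data/llava_images/{coco,gqa,ocr_vqa,textvqa,vg}
```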
2. Pretrain Data
LLaVA-Pretrain
```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co./datasets/liuhaotian/LLaVA-Pretrain --depth=1
```
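If the pretraining images arrive as a zip archive inside the dataset repo (check what you actually downloaded; the archive name below is an assumption), unzip them so that the `images` folder matches the file structure above:
```shell
cd LLaVA-Pretrain
unzip images.zip -d images  # adjust the archive name to the file you see
```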
3. Finetune Data
3.1 Text data
LLaVA-Instruct-150K
```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co./datasets/liuhaotian/LLaVA-Instruct-150K --depth=1
```
3.2 Image data
3.2.1 COCO (coco): [train2017](http://images.cocodataset.org/zips/train2017.zip)
3.2.2 GQA (gqa): [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)
3.2.3 OCR-VQA (ocr_vqa): [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing)
⚠️⚠️⚠️ Modify the name of OCR-VQA's images to keep the extension as `.jpg`!
```shell
#!/bin/bash
# Copy every non-.jpg image to a .jpg-named copy so the filenames match the annotation json
ocr_vqa_path="<your-directory-path>"
find "$ocr_vqa_path" -type f | while read -r file; do
    extension="${file##*.}"
    if [ "$extension" != "jpg" ]
    then
        cp -- "$file" "${file%.*}.jpg"
    fi
done
```
3.2.4 TextVQA (textvqa): [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)
3.2.5 VisualGenome (VG): [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)
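After downloading, extract each archive into the matching subfolder of `./data/llava_data/llava_images` shown in the file structure above. A rough sketch (the archive paths and the folder names inside each archive are assumptions; adjust them to what you actually downloaded):
```shell
cd ./data/llava_data/llava_images
unzip /path/to/train2017.zip -d coco            # -> coco/train2017
unzip /path/to/gqa/images.zip -d gqa            # -> gqa/images
unzip /path/to/train_val_images.zip -d textvqa  # -> textvqa/train_images
unzip /path/to/vg/images.zip -d vg              # -> vg/VG_100K
unzip /path/to/vg/images2.zip -d vg             # -> vg/VG_100K_2
```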
## Cheers! Now train your own model!
1. Alignment module pretraining
```
# single GPU
xtuner train ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain.py --deepspeed deepspeed_zero2
# multiple GPU
NPROC_PER_NODE=4 xtuner train ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain.py --deepspeed deepspeed_zero2
```
#### Remember to change the batch size and gradient accumulation parameters to fit your hardware, so that GPU_num * batch_size * gradient_accumulation stays roughly equal to mine and the result can be reproduced.
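To locate the relevant settings in the config (the variable names `batch_size` and `accumulative_counts` are what typical Xtuner LLaVA configs use; confirm them in your copy of the config):
```
grep -nE "batch_size|accumulative_counts" ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain.py
```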
The checkpoints and tensorboard logs are saved in ./work_dirs/ by default. I only train for 1 epoch, the same as the original LLaVA paper. Some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse in other domains.
This is my loss curve for llava-clip-internlm2-1_8b-pretrain-v1:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/iNxPxfOvSJq8ZPz8uP_sP.png)
And the learning rate curve:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/U1U9Kapcd6AIEUySvt2RS.png)
2. Instruction following fine-tuning
```
NPROC_PER_NODE=4 xtuner train ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu4_finetune.py --deepspeed deepspeed_zero2
```
Here is my loss curve (the curve fluctuates strongly because the batch size is small, and I only record batch loss instead of epoch loss):
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/kby2Y1dixeTaALliZ4pJa.png)
And the learning rate curve:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/7ue98bikCOU7ub2jEHrom.png)
## Convert the checkpoints to Huggingface safetensors format
```
xtuner convert pth_to_hf ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu4_finetune.py ./work_dirs/iter_xxx.pth ./my_lora_and_projector
```
The adapter still needs to be used together with the internlm/internlm2-chat-1_8b and openai/clip-vit-large-patch14-336 models. I have not tried to merge them yet, but it is possible with Xtuner, see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
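Xtuner's `xtuner convert merge` command can fold a LoRA adapter into its base model. A rough sketch only (the `llm_adapter` subfolder name is an assumption about the converted output; follow the linked tutorial for the exact LLaVA procedure):
```
xtuner convert merge internlm/internlm2-chat-1_8b ./my_lora_and_projector/llm_adapter ./merged_llm --max-shard-size 2GB
```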
## MMBench Evaluation
You can first download the MMBench data:
```
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/CCBench.tsv
```
Then run:
```
NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-chat-1_8b \
--visual-encoder openai/clip-vit-large-patch14-336 \
--llava ./my_lora_and_projector \
--prompt-template internlm2_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
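For example, to evaluate on the English dev split and write the results to a local folder (both paths are placeholders):
```
export MMBENCH_DATA_PATH=./MMBench_DEV_EN.tsv
export RESULT_PATH=./mmbench_results
```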
You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate it on other benchmarks.
## Deployment
The Xtuner team is developing an HF chatbot (based on Huggingface transformers) and an LMDeploy chatbot (based on TurboMind). I am waiting for the final version of their API.