## Model
llava-dinov2-internlm2-7b-v1 is a LLaVA model fine-tuned from [InternLM2-Chat-7B](https://huggingface.co./internlm/internlm2-chat-7b) and [Dinov2-large](https://huggingface.co./facebook/dinov2-large) on [LLaVA-Pretrain](https://huggingface.co./datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co./datasets/liuhaotian/LLaVA-Instruct-150K) with [XTuner](https://github.com/InternLM/xtuner). I thank [Zhihao Lin](https://github.com/LZHgrla) and [pppppM](https://github.com/pppppM) from the XTuner team for their help. I also thank the Hugging Face Transformers team for approving [my pull request](https://github.com/huggingface/transformers/pull/28504), which made it possible to train Dinov2 in bf16.
I did not carefully tune the training hyperparameters, but the model still shows the capability to solve some tasks. This demonstrates that a visual encoder can be integrated with an LLM even when the encoder has not been aligned with natural language through contrastive learning, as CLIP is.
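The integration works because a small projector maps visual patch features into the LLM's token-embedding space, so the LLM consumes image patches as if they were extra tokens. Below is a minimal NumPy sketch of this idea (not the actual XTuner module); the dimensions assume Dinov2-large's 1024-d features and InternLM2-7B's 4096-d embeddings, and the two-layer MLP with its activation choice is illustrative only.

```python
import numpy as np

# Assumed dimensions: Dinov2-large patch features are 1024-d,
# InternLM2-7B token embeddings are 4096-d.
VISION_DIM, LLM_DIM, NUM_PATCHES = 1024, 4096, 256

rng = np.random.default_rng(0)

# A LLaVA-style projector: a small MLP mapping each visual patch
# feature into the LLM embedding space (weights here are random).
W1 = rng.standard_normal((VISION_DIM, LLM_DIM)) * 0.02
W2 = rng.standard_normal((LLM_DIM, LLM_DIM)) * 0.02

def project(patch_features: np.ndarray) -> np.ndarray:
    """Map [num_patches, vision_dim] -> [num_patches, llm_dim]."""
    hidden = np.maximum(patch_features @ W1, 0.0)  # ReLU here; GELU in practice
    return hidden @ W2

patches = rng.standard_normal((NUM_PATCHES, VISION_DIM))
tokens = project(patches)
print(tokens.shape)  # (256, 4096)
```

The projected features are concatenated with the text-token embeddings before being fed to the LLM, which is why only this module (plus LoRA adapters) needs to be trained during alignment.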
## Results
Model | MMBench Test (EN) | MMBench Dev (EN) | MMBench Test (CN) | MMBench Dev (CN) | CCBench Dev
------------- | ------------- | ------------- | ------------- | ------------- | -------------
LLaVA-v1.5-7B | 67.7 | 69.2 | 61.0 | 59.7 | 28.4
LLaVA-InternLM-7B | 69.0 | 68.5 | 66.7 | 63.8 | 37.3
LLaVA-InternLM2-7B | 73.3 | 74.6 | 71.7 | 72.0 | 42.5
llava-dinov2-internlm2-7b-v1 | Not acquired yet | 65.2 | Not acquired yet | 61.6 | 45.3
## Quickstart
### Installation
```shell
pip install 'xtuner[deepspeed]'
cd ./path_to_xtuner
# Now replace the source files with the modified versions in the modified_xtuner_code directory
```
## Chat
```shell
xtuner chat internlm/internlm2-chat-7b \
  --visual-encoder facebook/dinov2-large \
  --llava ./lora_and_projector \
  --prompt-template internlm2_chat \
  --image $IMAGE_PATH \
  --no-streamer
```
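For reference, `--prompt-template internlm2_chat` wraps each turn in InternLM2's `<|im_start|>`/`<|im_end|>` markers. The helper below is my reading of that layout, not XTuner's actual template code; verify against the `PROMPT_TEMPLATE` definitions in the XTuner source before relying on it.

```python
def build_internlm2_chat_prompt(user_msg: str, system: str = "") -> str:
    """Assemble a single-turn prompt in the internlm2_chat layout
    (approximate sketch; check XTuner's PROMPT_TEMPLATE for the exact form)."""
    prompt = ""
    if system:
        prompt += f"<|im_start|>system\n{system}<|im_end|>\n"
    prompt += f"<|im_start|>user\n{user_msg}<|im_end|>\n<|im_start|>assistant\n"
    return prompt

print(build_internlm2_chat_prompt("Describe the image."))
```

Using the matching template at inference time matters: a mismatched template degrades the chat quality because the model never saw that token layout during fine-tuning.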
## Training
1. Alignment module pretraining
```shell
NPROC_PER_NODE=8 xtuner train ./llava_internlm2_chat_7b_dinov2_e1_gpu8_pretrain.py --deepspeed deepspeed_zero2
```
The checkpoints and TensorBoard logs are saved in ./work_dirs/ by default. I trained for only 1 epoch, matching the original LLaVA recipe. Some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse on other domains.
Here is my loss curve:
![pretraining loss curve](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/l5TdcjzCJmrCVdNb37Ey3.png)
2. Instruction following fine-tuning
```shell
NPROC_PER_NODE=8 xtuner train ./llava_internlm2_chat_7b_dinov2_e1_gpu8_finetune.py --deepspeed deepspeed_zero2
```
Here is my loss curve (it fluctuates sharply because the batch size is small and I log the per-batch loss rather than an epoch average):
![fine-tuning loss curve](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/Yn1imlEutA7zC7tfapT2W.png)
## Convert the checkpoints to Hugging Face safetensors format
```shell
xtuner convert pth_to_hf ./llava_internlm2_chat_7b_dinov2_e1_gpu8_finetune.py ./work_dirs/epoch_1.pth ./my_lora_and_projector
```
The adapter still needs to be used together with the internlm/internlm2-chat-7b and facebook/dinov2-large models. I have not tried merging them yet, but it is possible with XTuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
## MMBench Evaluation
```shell
xtuner mmbench internlm/internlm2-chat-7b \
--visual-encoder facebook/dinov2-large \
--llava ./my_lora_and_projector \
--prompt-template internlm2_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
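The dev-split numbers in the table above come down to option-letter accuracy over the benchmark's multiple-choice questions. A toy sketch of that computation (the real MMBench evaluator is stricter: it also shuffles the options circularly and requires the answer to be correct under every rotation):

```python
def accuracy(preds: list[str], answers: list[str]) -> float:
    """Percentage of predicted option letters matching the ground truth."""
    assert len(preds) == len(answers)
    correct = sum(p.strip().upper() == a.strip().upper()
                  for p, a in zip(preds, answers))
    return 100.0 * correct / len(preds)

# Hypothetical predictions vs. ground-truth letters
preds   = ["A", "C", "B", "D"]
answers = ["A", "B", "B", "D"]
print(f"{accuracy(preds, answers):.1f}")  # 75.0
```

For the test splits, the ground-truth answers are not public, which is why those scores require submitting predictions to the official evaluation server.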
## Deployment
This function is still under development.