---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
---
## Model
llava-dinov2-internlm2-7b-v1 is a LLaVA model fine-tuned from [InternLM2-Chat-7B](https://huggingface.co/internlm/internlm2-chat-7b) and [Dinov2-large](https://huggingface.co/facebook/dinov2-large) on [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) using [XTuner](https://github.com/InternLM/xtuner). I thank [Zhihao Lin](https://github.com/LZHgrla) and [pppppM](https://github.com/pppppM) from the XTuner team for their help. I also thank the Hugging Face Transformers team for approving [my pull request](https://github.com/huggingface/transformers/pull/28504), which makes training Dinov2 in bf16 possible.

I did not carefully tune the training hyperparameters, but the model still shows the capability to solve some tasks. This suggests that a visual encoder can be integrated with an LLM even when the encoder has not been aligned with natural language through contrastive learning, as CLIP has.

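The integration described above can be sketched in a few lines. This is an illustrative toy, not the actual model code: an alignment module (here a two-layer MLP, the projector design used by LLaVA-1.5; the exact module in this model may differ) maps visual patch features (hidden size 1024 for dinov2-large) into the LLM embedding space (hidden size 4096 for InternLM2-7B), so visual tokens can be concatenated with the text token embeddings. The shapes are real; the weights and inputs are random stand-ins.

```python
import numpy as np

# Toy sketch of a LLaVA-style alignment module (NOT the actual model code):
# a 2-layer MLP projects visual patch features into the LLM embedding space
# so they can be prepended to the text token embeddings.
VIS_DIM = 1024   # dinov2-large hidden size
LLM_DIM = 4096   # internlm2-7b hidden size

rng = np.random.default_rng(0)

def project_visual_tokens(patch_feats, w1, b1, w2, b2):
    """Map (n_patches, VIS_DIM) features to (n_patches, LLM_DIM) embeddings."""
    h = np.maximum(patch_feats @ w1 + b1, 0.0)  # GELU in practice; ReLU here
    return h @ w2 + b2

# Random stand-ins for the trained projector weights.
w1, b1 = rng.normal(size=(VIS_DIM, LLM_DIM)) * 0.01, np.zeros(LLM_DIM)
w2, b2 = rng.normal(size=(LLM_DIM, LLM_DIM)) * 0.01, np.zeros(LLM_DIM)

patch_feats = rng.normal(size=(256, VIS_DIM))   # e.g. a 16x16 patch grid
visual_embeds = project_visual_tokens(patch_feats, w1, b1, w2, b2)

text_embeds = rng.normal(size=(32, LLM_DIM))    # 32 text token embeddings
llm_input = np.concatenate([visual_embeds, text_embeds], axis=0)
print(llm_input.shape)  # (288, 4096)
```

Only this small projector (plus LoRA adapters) is trained from scratch; the pretrained encoder and LLM supply everything else, which is why no contrastive language alignment of the encoder is required.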
## Results
Model | MMBench Test (EN) | MMBench Dev (EN) | MMBench Test (CN) | MMBench Dev (CN) | CCBench Dev
------------- | ------------- | ------------- | ------------- | ------------- | -------------
LLaVA-v1.5-7B | 67.7 | 69.2 | 61.0 | 59.7 | 28.4
LLaVA-InternLM-7B | 69.0 | 68.5 | 66.7 | 63.8 | 37.3
LLaVA-InternLM2-7B | 73.3 | 74.6 | 71.7 | 72.0 | 42.5
llava-dinov2-internlm2-7b-v1 | Not acquired yet | 65.2 | Not acquired yet | 61.6 | 45.3

## Quickstart
### Installation
```
pip install 'xtuner[deepspeed]'
cd ./path_to_xtuner
# Now replace the source code files with the modified versions in the modified_xtuner_code directory
```

## Chat
```
xtuner chat internlm/internlm2-chat-7b \
--visual-encoder facebook/dinov2-large \
--llava ./lora_and_projector \
--prompt-template internlm2_chat \
--image $IMAGE_PATH \
--no-streamer
```

## Training
1. Alignment module pretraining
```
NPROC_PER_NODE=8 xtuner train ./llava_internlm2_chat_7b_dinov2_e1_gpu8_pretrain.py --deepspeed deepspeed_zero2
```
The checkpoint and TensorBoard logs are saved in ./work_dirs/ by default. I train for only 1 epoch, matching the original LLaVA paper. Some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse in other domains.

Here is my loss curve:
![pretraining loss curve](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/l5TdcjzCJmrCVdNb37Ey3.png)

2. Instruction-following fine-tuning
```
NPROC_PER_NODE=8 xtuner train ./llava_internlm2_chat_7b_dinov2_e1_gpu8_finetune.py --deepspeed deepspeed_zero2
```
Here is my loss curve (the curve fluctuates strongly because the batch size is small, and I record per-batch loss instead of per-epoch loss):
![fine-tuning loss curve](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/Yn1imlEutA7zC7tfapT2W.png)

## Convert the checkpoints to Hugging Face safetensors format
```
xtuner convert pth_to_hf ./llava_internlm2_chat_7b_dinov2_e1_gpu8_finetune.py ./work_dirs/epoch_1.pth ./my_lora_and_projector
```
The adapter still needs to be used together with the internlm/internlm2-chat-7b and facebook/dinov2-large models. I have not tried merging them yet, but it is possible with XTuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).

## MMBench Evaluation
```
xtuner mmbench internlm/internlm2-chat-7b \
--visual-encoder facebook/dinov2-large \
--llava ./my_lora_and_projector \
--prompt-template internlm2_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```

## Deployment
Deployment support is still under development.