StarCycle committed
Commit 5deb5a9
1 Parent(s): 59f0550

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -218,7 +218,7 @@ And the learning rate curve:
 ```
 xtuner convert pth_to_hf ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu4_finetune.py ./work_dirs/iter_xxx.pth ./my_lora_and_projector
 ```
-The adapter still needs to be used with the internlm/internlm2-chat-7b and facebook/dinov2-large models. I have not tried to merge them yet, but it is possible with XTuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
+The adapter still needs to be used with the internlm/internlm2-chat-7b and the vision encoder. I have not tried to merge them yet, but it is possible with XTuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
 
 ## MMBench Evaluation
 You can first download the MMBench data:
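
For readers who want to try the merge the changed line mentions, below is a minimal sketch of attaching the converted LoRA adapter to the base LLM with Hugging Face PEFT. It assumes `./my_lora_and_projector` contains a standard PEFT LoRA adapter for the language model; the projector and vision encoder are handled by XTuner's LLaVA tooling and are not shown. The output path `./merged_internlm2_chat_7b` is illustrative, not from the original repo.

```python
# Minimal sketch (not the original author's script): attach the LoRA adapter
# produced by `xtuner convert pth_to_hf` to the base LLM, then optionally
# fold it into the base weights. The save path is hypothetical.
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base language model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-7b", trust_remote_code=True
)

# Wrap the base model with the converted LoRA weights.
model = PeftModel.from_pretrained(base, "./my_lora_and_projector")

# Optionally merge the adapter into the base weights and save the result.
merged = model.merge_and_unload()
merged.save_pretrained("./merged_internlm2_chat_7b")
```

Note that `merge_and_unload()` only folds the LoRA weights into the LLM; the projector must still be supplied to the LLaVA pipeline as described in the linked XTuner tutorial.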