StarCycle committed on
Commit bf4d46e
1 Parent(s): ebf4f8a

Update README.md

Files changed (1): README.md (+10 -0)
README.md CHANGED
@@ -213,6 +213,15 @@ xtuner convert pth_to_hf ./llava_internlm2_chat_7b_dinov2_e1_gpu8_finetune.py ./
The adapter still needs to be used with the internlm/internlm2-chat-7b and facebook/dinov2-large models. I have not tried to merge them yet, but it is possible with Xtuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
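If you do want a single merged checkpoint, the rough pattern with Xtuner's `xtuner convert merge` subcommand is sketched below (untested here; the adapter and output paths are placeholders, and the DINOv2 adapter has its own visual-encoder merge step covered in the tutorial):
```
# Rough, untested sketch: fold the LLM-side adapter into the base model.
# ./llava_adapter and ./internlm2-chat-7b-merged are placeholder paths.
xtuner convert merge \
    internlm/internlm2-chat-7b \
    ./llava_adapter \
    ./internlm2-chat-7b-merged
```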

## MMBench Evaluation
+ You can first download the MMBench data:
+ ```
+ wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_EN.tsv
+ wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_EN.tsv
+ wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_CN.tsv
+ wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_CN.tsv
+ wget https://opencompass.openxlab.space/utils/VLMEval/CCBench.tsv
+ ```
+ Then run:
```
NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-chat-7b \
  --visual-encoder facebook/dinov2-large \
@@ -221,6 +230,7 @@ NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-chat-7b \
  --data-path $MMBENCH_DATA_PATH \
  --work-dir $RESULT_PATH
```
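The command reads the benchmark file and the output directory from two environment variables; for example, pointing them at one of the .tsv files downloaded above (paths here are only examples):
```
# Example paths only; set MMBENCH_DATA_PATH to whichever split you want to evaluate.
export MMBENCH_DATA_PATH=./MMBench_DEV_EN.tsv
export RESULT_PATH=./mmbench_results
```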
+ You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate it on other benchmarks.
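With VLMEvalKit, the usual flow is to install the toolkit and call its `run.py` with a dataset and a model name; the model identifier below is a placeholder, since this adapter would first need to be registered (or wrapped) as a VLMEvalKit model:
```
# Generic VLMEvalKit flow; <model_name> is a placeholder, not a name this
# adapter ships with. Check the VLMEvalKit README for supported models.
git clone https://github.com/open-compass/VLMEvalKit
cd VLMEvalKit
pip install -e .
python run.py --data MMBench_DEV_EN --model <model_name>
```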
 
## Deployment
The Xtuner team is developing an HF chatbot (based on Huggingface transformers) and an LMDeploy chatbot (based on TurboMind). I am waiting for the final version of their API.