---
license: apache-2.0
---

## Model

llava-clip-internlm2-1_8b-pretrain-v1 is a LLaVA checkpoint built from internlm2-chat-1_8b and CLIP-ViT-Large-patch14-336, pretrained on the LLaVA-Pretrain dataset with XTuner.

The total size of the model is around 2.2B parameters, which makes it suitable for embedded applications such as robotics.
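The ~2.2B figure is roughly the sum of the two frozen backbones. A quick sanity check (the parameter counts below are approximations, not exact numbers from the checkpoints):

```python
# Approximate parameter counts (assumptions for illustration):
# internlm2-chat-1_8b ~1.89B, CLIP-ViT-Large-patch14-336 vision tower ~0.30B;
# the projector adds only a few million more.
llm_params = 1.89e9
vit_params = 0.30e9
total = llm_params + vit_params
print(f"~{total / 1e9:.1f}B")  # ~2.2B
```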

I have just finished the pretrain phase of the model and will release the fully finetuned model soon. You can also finetune your own version based on the checkpoint here.
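In the pretrain phase, LLaVA-style training updates only a small projector that maps vision-encoder features into the LLM's embedding space, while the ViT and LLM stay frozen. A minimal NumPy sketch of such a two-layer MLP projector (the dimensions and ReLU are assumptions for illustration; XTuner's actual module differs):

```python
import numpy as np

# Assumed dimensions: CLIP-ViT-L/14-336 emits 1024-dim patch features,
# internlm2-1_8b uses 2048-dim token embeddings.
CLIP_DIM, HIDDEN, LLM_DIM = 1024, 2048, 2048

rng = np.random.default_rng(0)
w1 = rng.normal(0, 0.02, (CLIP_DIM, HIDDEN))
w2 = rng.normal(0, 0.02, (HIDDEN, LLM_DIM))

def project(patch_features):
    """Map ViT patch features (n_patches, CLIP_DIM) into the LLM embedding space."""
    h = np.maximum(patch_features @ w1, 0.0)  # GELU in practice; ReLU here for brevity
    return h @ w2

# A 336x336 image with 14x14 patches yields (336/14)^2 = 576 patch tokens.
tokens = project(rng.normal(size=(576, CLIP_DIM)))
print(tokens.shape)  # (576, 2048)
```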

## Installation

```shell
git clone https://github.com/InternLM/xtuner
# In zsh, quote the extra: pip install -e './xtuner[deepspeed]'
pip install -e ./xtuner[deepspeed]
```

## Common Errors

1. `libGL.so.1: cannot open shared object file: No such file or directory`

   You can solve it by:

   ```shell
   # For Ubuntu
   sudo apt-get update
   sudo apt-get install libgl1-mesa-glx

   # For CentOS and Fedora
   sudo yum install mesa-libGL
   ```
2. ```
   Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
       Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
   ```

   You can solve it by reinstalling numpy.
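Alternatively, the error message itself offers a workaround: set `MKL_SERVICE_FORCE_INTEL` before anything imports numpy/mkl-service. A minimal sketch:

```python
import os

# Must be set before numpy / mkl-service is first imported.
os.environ["MKL_SERVICE_FORCE_INTEL"] = "1"

import numpy as np  # now imports without the MKL_THREADING_LAYER conflict
print(np.__version__)
```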

3. ```
   ImportError: InternLM2Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
   ```

   You just need:

   ```shell
   pip install protobuf
   ```
4. To use TensorBoard to visualize the training loss curve:

   ```shell
   pip install future tensorboard
   ```

## Data Preparation

1. File structure

   ```
   ./data/llava_data
   ├── LLaVA-Pretrain
       ├── blip_laion_cc_sbu_558k.json
       ├── blip_laion_cc_sbu_558k_meta.json
       └── images
   ```
2. Pretrain Data

   LLaVA-Pretrain

   ```shell
   # Make sure you have git-lfs installed (https://git-lfs.com)
   git lfs install
   git clone https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain --depth=1
   ```
3. Finetune Data

   Please check the final release version.
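After downloading, you can sanity-check the layout above with a short script (a hypothetical helper, not part of XTuner):

```python
from pathlib import Path

def missing_llava_pretrain_files(root="./data/llava_data"):
    """Return paths from the expected LLaVA-Pretrain layout that are absent."""
    base = Path(root) / "LLaVA-Pretrain"
    expected = [
        base / "blip_laion_cc_sbu_558k.json",
        base / "blip_laion_cc_sbu_558k_meta.json",
        base / "images",
    ]
    return [str(p) for p in expected if not p.exists()]

print(missing_llava_pretrain_files())  # [] when everything is in place
```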

Cheers! Now train your own model!

1. Alignment module pretraining

   ```shell
   NPROC_PER_NODE=8 xtuner train ./llava_internlm2_chat_7b_dinov2_e1_gpu8_pretrain.py --deepspeed deepspeed_zero2
   ```

The checkpoints and TensorBoard logs are saved to ./work_dirs/ by default. I train for only 1 epoch, matching the original LLaVA paper; some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse on other domains.

2. Instruction-following fine-tuning

   Please check the final release version.