Update README.md
README.md
CHANGED
@@ -2,7 +2,7 @@
 license: apache-2.0
 ---
 ## Model
-llava-clip-internlm2-1_8b-pretrain-v1 is a LLaVA checkpoint finetuned from [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) by [Xtuner](https://github.com/InternLM/xtuner). The pretraining phase took 16 hours on a single A6000 ada GPU.
+llava-clip-internlm2-1_8b-pretrain-v1 is a LLaVA checkpoint finetuned from [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) by [Xtuner](https://github.com/InternLM/xtuner). The pretraining phase took 16 hours on a single Nvidia A6000 ada GPU.
 
 The total size of the model is around 2.2B parameters, which is suitable for embedded applications like robotics.
 
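For context on the components named in the changed line, the sketch below loads the two published base models with the standard `transformers` APIs and sums their parameter counts, which is roughly where the 2.2B figure comes from (about 1.9B for the language model plus about 0.3B for the vision tower). This is only an illustrative sketch under that assumption; the pretrain projector weights this README describes are produced and consumed by XTuner's own tooling and are not loaded here.

```python
# Minimal sketch: load the base LLM and CLIP vision tower with Hugging Face
# transformers and count their parameters. The XTuner projector checkpoint
# itself is NOT loaded here; converting/merging it uses XTuner's tooling.
from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPImageProcessor, CLIPVisionModel

# Language backbone: InternLM2 chat 1.8B (needs trust_remote_code for its custom modeling code).
llm = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b", trust_remote_code=True)

# Vision tower: CLIP ViT-Large/14 at 336px input resolution.
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

# Rough total size of the combined LLaVA-style model (excluding the small projector).
total = sum(p.numel() for p in llm.parameters()) + sum(p.numel() for p in vision_tower.parameters())
print(f"~{total / 1e9:.1f}B parameters")
```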