Commit eb4a299 by Nagase-Kotono: Create README.md (parent: bf32684)
README.md
ADDED
@@ -0,0 +1,19 @@
---
license: apache-2.0
datasets:
- tabtoyou/KoLLaVA-CC3M-Pretrain-595K
---
# EEVE-LLaVA-2.8B-CLIP_KO-pretrain

![image](https://cdn.donmai.us/sample/d8/91/__nagase_kotono_idoly_pride_drawn_by_sakuranoron__sample-d8915e20211fe88aba61f1c215cc32d1.jpg)

**This model is a pretrained version of the LLaVA multimodal projector.**

**You can use it to finetune the [EEVE-Korean-Instruct-2.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) model.**

**Use [Bingsu/clip-vit-large-patch14-ko](https://huggingface.co/Bingsu/clip-vit-large-patch14-ko) as the vision encoder.**

## Hardware
***3x NVIDIA A100 40GB***

## Software
***Trained with [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA)***
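For orientation, the checkpoint above pretrains only the small projector that maps CLIP patch features into the language model's embedding space. A minimal sketch of such a projector (LLaVA's usual `mlp2x_gelu` shape) is below; the dimensions are assumptions, not read from this repo: 1024 is the hidden size of a CLIP ViT-L/14 tower, 2560 is a guess at EEVE-2.8B's hidden size, and 576 patch tokens assumes a 336px input (24×24 patches).

```python
import torch
import torch.nn as nn

# Assumed dimensions (not taken from this repo's config):
clip_hidden = 1024  # CLIP ViT-L/14 output hidden size
lm_hidden = 2560    # guessed EEVE-2.8B hidden size

# Two-layer MLP projector in the style of LLaVA's "mlp2x_gelu".
projector = nn.Sequential(
    nn.Linear(clip_hidden, lm_hidden),
    nn.GELU(),
    nn.Linear(lm_hidden, lm_hidden),
)

# Fake batch of CLIP patch features: 1 image, 576 patch tokens.
image_features = torch.randn(1, 576, clip_hidden)
tokens = projector(image_features)
print(tokens.shape)  # torch.Size([1, 576, 2560])
```

During a finetune, these projected tokens would be spliced into the language model's input sequence alongside the text embeddings; the actual training loop and checkpoint loading are handled by the MoE-LLaVA codebase linked above.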