# LLaVA: Compress model weights to int4 using NNCF
[LLaVA](https://llava-vl.github.io) (Large Language and Vision Assistant) is a large multimodal model that aims to be a general-purpose visual assistant, able to follow both language and image instructions to complete a variety of real-world tasks.

LLaVA connects the pre-trained [CLIP ViT-L/14](https://openai.com/research/clip) visual encoder with a large language model such as Vicuna, LLaMA v2, or MPT using a simple projection matrix.

![vlp_matrix.png](https://llava-vl.github.io/images/llava_arch.png)
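
Int4 weight compression with NNCF is typically done through the `nncf.compress_weights` API. The sketch below is a minimal example, assuming the LLaVA language model has already been exported to OpenVINO IR; the file paths and the `group_size`/`ratio` values are illustrative assumptions, not values prescribed by this repository.

```python
# Minimal sketch: int4 weight compression of an exported OpenVINO IR model
# with NNCF. Paths and parameter values are illustrative assumptions.
import nncf
import openvino as ov

core = ov.Core()
# Hypothetical path to the exported LLaVA language-model IR.
model = core.read_model("llava/openvino_language_model.xml")

# Group-wise asymmetric int4 compression of weight tensors;
# ratio=0.8 keeps roughly 20% of the weights in int8 to limit accuracy loss.
compressed_model = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    group_size=128,
    ratio=0.8,
)

ov.save_model(compressed_model, "llava/openvino_language_model_int4.xml")
```

Mixed int4/int8 compression of this kind trades a small amount of accuracy for a substantially smaller footprint; the `ratio` and `group_size` knobs can be tuned to rebalance that trade-off for a given model.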