---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- nlp
- llm
- mllm
---

# CrystalChat-7B-MLLM: a fully reproducible vision large language model based on the CrystalChat-7B LLM

Crystal-based models mimic the training recipe used for Vicuna 7B in LLaVA multimodal large language models (MLLMs). Crystal-based models are entirely transparent: all materials are open-sourced, including code, data, model checkpoints, intermediate results, and more.

| LLM Backbone | MME-P | MME-C | POPE | SciQA | TextVQA |
|--------------|-------|-------|------|-------|---------|
| CrystalCoder-7B | 1359.83 | 238.92 | 86.182 | 64.15 | 50.39 |
| CrystalChat-7B | 1456.53 | **308.21** | 86.96 | 67.77 | **57.84** |
| Vicuna-7B | **1481.12** | 302.85 | **87.174** | **67.97** | 56.49 |

*Table: Comparison of different LLM backbones on visual language understanding benchmarks. All models are instruction-tuned on general-domain data (i.e., LLaVA).*

## About Crystal
* 7 billion parameter LLM
* CLIP ViT-L/14 vision encoder
* Tokens: ????
* Languages: English
* Models Released: ???? model
* Trained in 2 stages
* License: Apache 2.0

Crystal-based models were developed as a collaboration between [MBZUAI](https://mbzuai.ac.ae/institute-of-foundation-models/), [Petuum](https://www.petuum.com/), and [LLM360](https://www.llm360.ai/).

## Evaluation

We report general evaluation metrics for MLLMs. MME serves as an extensive evaluation benchmark, assessing the perceptual and cognitive capabilities of MLLMs across 14 sub-tasks. We also evaluate our models on text-oriented visual question answering tasks using a diverse set of benchmark datasets, including ScienceQA and TextVQA. Finally, we assess our models' resistance to hallucination with POPE.

<center><img src="k2_table_of_tables.png" alt="k2 big eval table"/></center>

## Datasets and Mix

### Pretrain Data
LLaVA Visual Instruct Pretrain LCS-558K is a filtered subset of the LAION, CC, and SBU datasets, featuring a more balanced distribution of concept coverage. The file includes multimodal synthesized conversations generated from image-caption pairs by incorporating randomly selected instructions such as "Describe this image." It is used for pretraining in LLaVA, with the raw CC-3M caption serving as the default answer.
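
For illustration, here is a minimal sketch of how such a pretraining sample could be assembled from an image-caption pair and a randomly selected instruction. The field names and the instruction list are assumptions for illustration, not the exact LLaVA data format.

```python
import random

# A few instruction templates of the kind used to synthesize pretraining
# conversations from image-caption pairs (illustrative, not the exact list).
INSTRUCTIONS = [
    "Describe this image.",
    "Give a brief description of the image.",
    "Share a concise interpretation of the image provided.",
]

def make_pretrain_sample(image_path: str, caption: str) -> dict:
    """Pair a raw caption with a randomly chosen instruction to form a
    single-turn conversation; the caption serves as the default answer."""
    return {
        "image": image_path,
        "conversations": [
            {"from": "human", "value": "<image>\n" + random.choice(INSTRUCTIONS)},
            {"from": "gpt", "value": caption},
        ],
    }

print(make_pretrain_sample("000000001.jpg", "A dog running on a beach."))
```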

### Finetune

The finetuning dataset was created by LLaVA from an academic-task-oriented VQA data mixture together with data from ShareGPT. LLaVA Visual Instruct 150K is a dataset of GPT-generated multimodal instruction-following data. It is designed for visual instruction tuning and aims to build large multimodal models with capabilities akin to GPT-4 in both vision and language.

<!-- The full data sequence can be found [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) -->

| Data | Size | Response formatting prompts |
|------|------|-----------------------------|
| LLaVA [36] | 158K | – |
| ShareGPT [46] | 40K | – |
| VQAv2 [19] | 83K | Answer the question using a single word or phrase. |
| GQA [21] | 72K | Answer the question using a single word or phrase. |
| OKVQA [41] | 9K | Answer the question using a single word or phrase. |
| OCRVQA [42] | 80K | Answer the question using a single word or phrase. |
| A-OKVQA [45] | 66K | Answer with the option’s letter from the given choices directly. |
| TextCaps [47] | 22K | Provide a one-sentence caption for the provided image. |
| RefCOCO [24, 40] | 48K | Note: randomly choose between the two formats. Provide a short description for this region. |
| VG [25] | 86K | Provide the bounding box coordinate of the region this sentence describes. |
| **Total** | **665K** | |

**Table 7. Instruction-following data mixture of LLaVA-1.5.**
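
As an illustration of how these response formatting prompts are applied, here is a minimal sketch that appends the prompt from the table above to a raw question. The mapping and helper are assumptions for illustration, not the actual LLaVA-1.5 preprocessing code.

```python
# Response formatting prompts from the table above (illustrative subset;
# sources marked "–" receive no extra prompt).
FORMAT_PROMPTS = {
    "vqav2": "Answer the question using a single word or phrase.",
    "a-okvqa": "Answer with the option's letter from the given choices directly.",
    "textcaps": "Provide a one-sentence caption for the provided image.",
}

def format_question(source: str, question: str) -> str:
    """Append the source-specific formatting prompt to the raw question."""
    prompt = FORMAT_PROMPTS.get(source)
    return f"{question}\n{prompt}" if prompt else question

print(format_question("vqav2", "What color is the bus?"))
# What color is the bus?
# Answer the question using a single word or phrase.
```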

# LLM360 Research Suite

## Stage 2 - Finetuning
| Checkpoints |
| ----------- |
| [CrystalChat](https://huggingface.co/qazimbhat1/my-model-repo3/tree/main) |
| [CrystalCoder](https://huggingface.co/qazimbhat1/Crystal-based-MLLM-7B/tree/Crystal-coder-7B) |

## Stage 1 - Pretraining
| Checkpoints |
| ----------- |
| [CrystalChat](https://huggingface.co/qazimbhat1/Crystal-based-MLLM-7B/tree/Crystal-based-MLLM-7B-pretrain) |
| [CrystalCoder](https://huggingface.co/qazimbhat1/Crystal-based-MLLM-7B/tree/Crystal-coder-7B-pretrain) |

To list all checkpoint branches, run `git branch -a` in a local clone of the repository.
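
Alternatively, a specific checkpoint branch can be pulled directly with the `revision` argument of `from_pretrained`. A minimal sketch, with the repository id and branch name taken from the links in the tables above:

```python
from transformers import AutoModelForCausalLM

# Load the Stage 1 (pretraining) CrystalCoder checkpoint from its branch.
model = AutoModelForCausalLM.from_pretrained(
    "qazimbhat1/Crystal-based-MLLM-7B",
    revision="Crystal-coder-7B-pretrain",  # checkpoint branch name
    trust_remote_code=True,
)
```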

# Loading Crystal
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer; trust_remote_code is required because the model ships
# its own modeling code.
tokenizer = AutoTokenizer.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    padding_side="right",
    trust_remote_code=True,
)

# Load the model in half precision and place it on the available devices.
model = AutoModelForCausalLM.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
)
```
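
Once loaded, a minimal text-only generation sketch follows, assuming the remote modeling code exposes the standard `generate` API; multimodal (image) inputs require the model's own image preprocessing, which is not shown here.

```python
# Text-only sanity check; image inputs need the model's own preprocessing.
prompt = "Describe what a multimodal language model can do."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```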

## LLM360
LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development.

Crystal-based models enable community-owned AGI by creating standards and tools to advance the bleeding edge of LLM capability and empower knowledge transfer, research, and development.

We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high-quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators.

[Visit us](https://www.llm360.ai/)

## Citation

**BibTeX:**

```bibtex
@article{
  title={},
  author={},
  year={},
}
```