---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- nlp
- llm
- mllm
---

# CrystalChat-7B-Web2Code: a fully reproducible vision large language model based on the CrystalChat-7B LLM for webpage code generation

## Model Description

CrystalChat-7B-Web2Code is a CrystalChat-7B-based multi-modal large language model (MLLM) that mimics the training recipe used for the Vicuna-7B-based [LLaVA-v1.5](https://huggingface.co/docs/transformers/main/model_doc/llava). CrystalChat-7B-based MLLMs are entirely transparent: all materials, including code, data, model checkpoints, intermediate results, and more, are open-sourced at [TODO: Add paper link](). The CrystalChat-7B-Web2Code MLLM is specialized in webpage image-to-HTML code generation.

### About CrystalChat-7B-Web2Code:
* 7 billion parameter LLM
* CLIP ViT-L/14-336px vision encoder
* Languages: English
* Models Released: CrystalChat-7B-Web2Code
* Trained in 2 stages
* License: Apache 2.0

Crystal-based models were developed as a collaboration between [MBZUAI](https://mbzuai.ac.ae/institute-of-foundation-models/), [Petuum](https://www.petuum.com/), and [LLM360](https://www.llm360.ai/).

## Evaluation

General evaluation metrics for MLLMs: MME serves as an extensive evaluation benchmark, aiming to assess the perceptual and cognitive capabilities of MLLMs across 14 sub-tasks. Additionally, we evaluate the performance of our models on text-oriented visual question answering tasks employing a diverse set of benchmark datasets, including ScienceQA and TextVQA. Furthermore, we assess our models' robustness against hallucination through POPE.

| LLM Backbone | MME-P | MME-C | POPE | SciQA | TextVQA |
|-----------------------------------|---------|--------|-------|--------|---------|
| CrystalCoder-7B | 1359.83 | 238.92 | 86.182 | 64.15 | 50.39 |
| CrystalChat-7B | 1456.53 | **308.21** | 86.96 | 67.77 | **57.84** |
| Vicuna-7B | **1481.12** | 302.85 | **87.174** | **67.97** | 56.49 |

*Table 1: Comparison of different LLM backbones on visual language understanding benchmarks. All models are instruction-tuned on the general-domain data (i.e. LLaVA).*

TODO: Add general and code evaluations once Jason confirms.

## Data and Training Details

### Pretrain Data
LLaVA Visual Instruct Pretrain LCS-558K is a filtered subset of the LAION, CC, and SBU datasets, featuring a more balanced distribution of concept coverage. The file includes multimodal synthesized conversations generated from image-caption pairs by incorporating randomly selected instructions such as "Describe this image." It is used for pretraining in LLaVA, with the raw CC-3M caption serving as the default answer.

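For illustration, a single pretraining record in this conversation format might look like the sketch below. The field names mirror common LLaVA-style data conventions and are assumptions here, not an excerpt of the released file.

```python
# Illustrative sketch only (assumed field names, following LLaVA-style records):
# a pretraining example pairs a randomly selected instruction with the raw caption as the answer.
pretrain_record = {
    "id": "lcs-558k-000001",                 # hypothetical example id
    "image": "images/00000/000000001.jpg",   # hypothetical image path
    "conversations": [
        {"from": "human", "value": "<image>\nDescribe this image."},
        {"from": "gpt", "value": "a wooden dining table with four matching chairs"},  # raw caption
    ],
}
```
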

### Finetune Data
The finetuning data contains the following:

#### LLaVA Finetuning Data
The dataset chosen was created by LLaVA with an academic-task-oriented VQA data mixture and data from ShareGPT. LLaVA Visual Instruct 150K is a dataset of GPT-generated multimodal instruction-following data. It is designed for visual instruction tuning and aims to develop large multimodal models with capabilities akin to GPT-4 in both vision and language.

<!-- The full data sequence can be found [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) -->

| Data | Size | Response formatting prompts |
|---------------|------|--------------------------------------------------------------------------|
| LLaVA [36] | 158K | – |
| ShareGPT [46] | 40K | – |
| VQAv2 [19] | 83K | Answer the question using a single word or phrase. |
| GQA [21] | 72K | Answer the question using a single word or phrase. |
| OKVQA [41] | 9K | Answer the question using a single word or phrase. |
| OCRVQA [42] | 80K | Answer the question using a single word or phrase. |
| A-OKVQA [45] | 66K | Answer with the option’s letter from the given choices directly. |
| TextCaps [47] | 22K | Provide a one-sentence caption for the provided image. |
| RefCOCO [24, 40] | 48K | Provide a short description for this region. (Randomly chosen between the two region formats.) |
| VG [25] | 86K | Provide the bounding box coordinate of the region this sentence describes. |
| **Total** | **665K** | |

*Table 2. Instruction-following data mixture of LLaVA-1.5.*

#### Web2Code Data

The Web2Code instruction tuning dataset was released in [Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs](TODO: Add link). The dataset construction and instruction generation process involves four key components:

* DWCG: newly created webpage image-code pair data, generated as high-quality HTML webpage-code pairs with GPT-3.5 following the CodeAlpaca prompt and converted into instruction-following data.

* DWCG<sub>R</sub>: refined webpage code generation data, obtained by transforming existing datasets, including WebSight and Pix2Code, into an instruction-following format similar to the LLaVA data so they can be used to train MLLMs (see the sketch after this list).

* DWU: newly created text question-answer pair data for webpage understanding, generated with GPT-3.5 on top of our new webpage data.

* DWU<sub>R</sub>: refined WebSRC question-answer data, improved in quality using GPT-4.

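The sketch below shows, under an assumed prompt and assumed field names, how a (webpage screenshot, HTML) pair could be converted into a LLaVA-style instruction-following record. It is illustrative only; the released Web2Code pipeline may use different instructions and formatting.

```python
import json

# Minimal, illustrative conversion of a (screenshot, HTML) pair into a LLaVA-style
# instruction-following record. Prompt text and field names are assumptions.
def to_instruction_record(example_id: str, image_path: str, html_code: str) -> dict:
    instruction = (
        "<image>\n"
        "Generate the HTML code that reproduces the webpage shown in the image."
    )
    return {
        "id": example_id,
        "image": image_path,
        "conversations": [
            {"from": "human", "value": instruction},
            {"from": "gpt", "value": html_code},
        ],
    }

record = to_instruction_record(
    "websight-000001",              # hypothetical example id
    "images/websight-000001.png",   # hypothetical screenshot path
    "<html><body><h1>Hello</h1></body></html>",
)
print(json.dumps(record, indent=2))
```
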

### Code Datasets

| Dataset | DWCG (ours) | DWCG<sub>R</sub> (ours) |
|---------|-------------|-------------------|
| **Instruction** | ✓ | ✓ |
| **Source** | Synthetic | Synthetic |
| **Size** | 60K | 824.7K |
| **Avg Length (tokens)** | 471.8±162.3 | 652.85±157.0 |
| **Avg Tag Count** | 28.1±10.6 | 35.3±9.0 |
| **Avg DOM Depth** | 5.3±1.0 | 6.5±1.0 |
| **Avg Unique Tags** | 13.6±2.7 | 13.5±2.5 |

*Table 3. DWCG is a newly generated GPT-3.5-based dataset, while DWCG<sub>R</sub> is the refined dataset that utilizes the WebSight and Pix2Code datasets.*

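As a rough illustration of how statistics like those in Table 3 (tag count, unique tags, DOM depth) can be computed for a single HTML sample, one can walk the parsed DOM with BeautifulSoup. This is a minimal sketch, not the script used to produce the table.

```python
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup
from bs4.element import Tag

def dom_depth(node) -> int:
    """Depth of the deepest element at or below `node` (1 for a leaf element)."""
    child_tags = [c for c in getattr(node, "children", []) if isinstance(c, Tag)]
    if not child_tags:
        return 1
    return 1 + max(dom_depth(c) for c in child_tags)

html = "<html><body><div><p>Hello</p><p>World</p></div></body></html>"
soup = BeautifulSoup(html, "html.parser")

tags = soup.find_all(True)                      # every element in the document
print("tag count:   ", len(tags))
print("unique tags: ", len({t.name for t in tags}))
print("DOM depth:   ", dom_depth(soup.html))
```
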

### Webpage Understanding Datasets

| Dataset | DWU | DWU<sub>R</sub> |
|---------------|---------|-----------------|
| **Instruction** | ✓ | ✓ |
| **Size** | 243.5K | 51.5K |

*Table 4. Distribution of the DWU and DWU<sub>R</sub> datasets. Both datasets include high-quality question-answer pairs for webpage understanding.*

<!-- TODO: check if this is needed; if yes, replace with the corresponding checkpoints for the code model -->
## Stage 2 - Finetuning
| Checkpoints |
| ----------- |
| [CrystalChat](https://huggingface.co/qazimbhat1/my-model-repo3/tree/main) |
| [CrystalCoder](https://huggingface.co/qazimbhat1/Crystal-based-MLLM-7B/tree/Crystal-coder-7B) |

## Stage 1 - Pretraining
| Checkpoints |
| ----------- |
| [CrystalChat](https://huggingface.co/qazimbhat1/Crystal-based-MLLM-7B/tree/Crystal-based-MLLM-7B-pretrain) |
| [CrystalCoder](https://huggingface.co/qazimbhat1/Crystal-based-MLLM-7B/tree/Crystal-coder-7B-pretrain) |

To list all checkpoint branches, run `git branch -a` inside a local clone of the repository.

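Alternatively, a specific stage's checkpoint can be fetched by passing the branch name as the `revision` when downloading the repository snapshot. The repository ID and branch name below are taken from the checkpoint links above; this is a generic `huggingface_hub` usage sketch, not a script shipped with the model.

```python
from huggingface_hub import snapshot_download

# Download the Stage 1 (pretraining) CrystalChat checkpoint by selecting its branch.
local_dir = snapshot_download(
    repo_id="qazimbhat1/Crystal-based-MLLM-7B",
    revision="Crystal-based-MLLM-7B-pretrain",
)
print("Checkpoint downloaded to:", local_dir)
```
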

## Examples

TODO: Add image as sample example

Example 1:

<center><img src="assets/ori.png" alt="Original input image"/></center>

*Image 1. Original input image.*

<center><img src="assets/crystalchat.png" alt="CrystalChat-7B-Web2Code model generated output"/></center>

*Image 2. CrystalChat-7B-Web2Code model output.*

Example 2:

<center><img src="assets/hand_draw1.pdf" alt="CrystalChat-7B-Web2Code model generated output"/></center>

*Image 3. Hand-drawn webpage input and the corresponding CrystalChat-7B-Web2Code generated output.*

## Loading Crystal
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    padding_side="right",
    trust_remote_code=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
)
```
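
Once loaded, the model can be prompted like a standard `transformers` causal LM. The snippet below is a minimal text-only sketch; the exact chat template and image-input interface are defined by the model's remote code, so treat the prompt format as an assumption.

```python
# Continues from the loading snippet above (tokenizer and model already created).
prompt = "Write the HTML for a simple landing page with a header, a hero section, and a footer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
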

## LLM360
LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development.

Crystal-based models enable community-owned AGI by creating standards and tools to advance the bleeding edge of LLM capability and empower knowledge transfer, research, and development.

We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high-quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators.

[Visit us](https://www.llm360.ai/)

## Citation

**BibTeX:**

```bibtex
@article{
  title={},
  author={},
  year={},
}
```