qazimbhat1 committed 56cb2bd (parent: acbaabd): Create README.md

---
license: mit
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- nlp
- llm
- mllm
datasets:
- MBZUAI/Web2Code
---

# CrystalChat-7B-Web2Code: a fully reproducible vision large language model based on the CrystalChat-7B LLM for webpage code generation

## Model Description

The CrystalChat-7B based multi-modal large language model (MLLM) follows the training recipe used for the Vicuna-7B based [LLaVA-v1.5](https://huggingface.co/docs/transformers/main/model_doc/llava). CrystalChat-7B based MLLMs are fully transparent: all materials, including code, data, model checkpoints, intermediate results, and more, are open-sourced at [TODO: Add paper link](). CrystalChat-7B-Web2Code is an MLLM specialized in webpage image-to-HTML code generation.


## Evaluations

### Webpage Code Generation Benchmark

| LLM Backbone | DWCG | DWU | DWCG<sub>R</sub> | DWU<sub>R</sub> | VSA ↑ | CAD ↑ | TCC ↑ | UII ↑ | Overall ↑ |
|--------------------|------|-----|------------------|-----------------|-----------|-----------|-----------|-----------|-----------|
| **CrystalChat-7B** | | | | | 4.714 | 4.572 | 4.865 | 5.147 | 4.825 |
| | ✓ | | | | 7.900 | 8.001 | 8.204 | 8.215 | 8.080 |
| | ✓ | ✓ | | | 7.900 | 8.001 | 8.204 | 8.215 | 8.080 |
| | ✓ | ✓ | ✓ | ✓ | **8.384** | **8.287** | **8.417** | **8.488** | **8.394** |
| **Vicuna-7B** | | | | | 3.042 | 3.250 | 3.333 | 3.167 | 3.198 |
| | ✓ | | | | 6.871 | 6.660 | 6.589 | 6.897 | 6.754 |
| | | ✓ | | | 3.898 | 3.489 | 3.340 | 3.651 | 3.595 |
| | ✓ | ✓ | ✓ | ✓ | **7.876** | **7.687** | **7.267** | **7.563** | **7.598** |

**Table 1:** The performance of different LLM backbones under various data configurations on our Webpage Code Generation Benchmark (WCGB). "VSA" denotes Visual Structure and Alignment, "CAD" Color and Aesthetic Design, "TCC" Textual and Content Consistency, and "UII" User Interface and Interactivity.

### Webpage Understanding Accuracy

| LLM Backbone | DWCG | DWU | DWCG<sub>R</sub> | DWU<sub>R</sub> | Accuracy (%) |
|--------------------|------|-----|------------------|-----------------|--------------|
| **CrystalChat-7B** | | | | | 73.94 |
| | ✓ | ✓ | | | 73.48 |
| | ✓ | ✓ | ✓ | ✓ | **74.14** |
| **Vicuna-7B** | | | | | 71.12 |
| | ✓ | | | | 68.11 |
| | | ✓ | | | 70.82 |
| | ✓ | ✓ | ✓ | ✓ | **71.23** |

**Table 2:** The accuracy of webpage understanding under various data configurations and LLM backbones. All models are instruction-tuned and evaluated on our Webpage Understanding Benchmark (WUB). The general-domain data (i.e., LLaVA) is included in all data configurations by default.


### About CrystalChat-7B-Web2Code
* 7 billion parameter LLM
* CLIP ViT-L/14-336px vision encoder
* Languages: English
* Models Released: CrystalChat-7B-Web2Code
* Trained in 2 stages
* License: MIT



## General Evaluation

We report general evaluation metrics for MLLMs. MME serves as an extensive evaluation benchmark that assesses the perceptual and cognitive capabilities of MLLMs across 14 sub-tasks. We also evaluate our models on text-oriented visual question answering tasks using a diverse set of benchmark datasets, including ScienceQA and TextVQA. Furthermore, we assess our models' robustness against hallucination using POPE.

| LLM Backbone | MME-P | MME-C | POPE | SciQA | TextVQA |
|-----------------|-------------|------------|------------|-----------|-----------|
| CrystalCoder-7B | 1359.83 | 238.92 | 86.182 | 64.15 | 50.39 |
| CrystalChat-7B | 1456.53 | **308.21** | 86.96 | 67.77 | **57.84** |
| Vicuna-7B | **1481.12** | 302.85 | **87.174** | **67.97** | 56.49 |

**Table 3:** Comparison of different LLM backbones on visual language understanding benchmarks. All models are instruction-tuned on the general-domain data (i.e., LLaVA).



## Data and Training Details

### Pretrain Data
LLaVA Visual Instruct Pretrain LCS-558K is a filtered subset of the LAION, CC, and SBU datasets, featuring a more balanced distribution of concept coverage. The file includes multimodal synthesized conversations generated from image-caption pairs by incorporating randomly selected instructions such as "Describe this image." It is used for pretraining in LLaVA, with the raw CC-3M caption serving as the default answer.
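
To make this format concrete, here is a minimal sketch of building one LCS-558K-style pretraining sample in a LLaVA-style conversation record; the instruction pool, field names, and helper function are illustrative assumptions rather than the exact LLaVA implementation.

```python
import json
import random

# Illustrative instruction pool; the actual LCS-558K prompt set may differ.
# "Describe this image." is the example instruction mentioned above.
INSTRUCTIONS = [
    "Describe this image.",
    "Provide a brief description of the given image.",
]

def build_pretrain_sample(sample_id: str, image_file: str, caption: str) -> dict:
    """Pair a raw caption with a randomly chosen instruction in a
    LLaVA-style conversation record (human asks, gpt answers with the caption)."""
    instruction = random.choice(INSTRUCTIONS)
    return {
        "id": sample_id,
        "image": image_file,
        "conversations": [
            {"from": "human", "value": f"<image>\n{instruction}"},
            {"from": "gpt", "value": caption},  # the raw caption is the default answer
        ],
    }

# Example usage
record = build_pretrain_sample("0001", "images/0001.jpg", "a red bicycle leaning against a brick wall")
print(json.dumps(record, indent=2))
```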

TO-DO: Add new short caption data
### Finetune Data
The finetuning data contains the following:

#### LLaVA Finetuning Data
This dataset was created by LLaVA as a mixture of academic-task-oriented VQA data and data from ShareGPT. LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data. It is designed for visual instruction tuning and aims to build large multimodal models with capabilities akin to GPT-4 in both vision and language.

<!-- The full data sequence can be found [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) -->

| Data | Size | Response formatting prompts |
|------------------|------|--------------------------------------------------------------------------|
| LLaVA [36] | 158K | – |
| ShareGPT [46] | 40K | – |
| VQAv2 [19] | 83K | Answer the question using a single word or phrase. |
| GQA [21] | 72K | Answer the question using a single word or phrase. |
| OKVQA [41] | 9K | Answer the question using a single word or phrase. |
| OCRVQA [42] | 80K | Answer the question using a single word or phrase. |
| A-OKVQA [45] | 66K | Answer with the option's letter from the given choices directly. |
| TextCaps [47] | 22K | Provide a one-sentence caption for the provided image. |
| RefCOCO [24, 40] | 48K | Provide a short description for this region. (Note: randomly chosen between the two formats.) |
| VG [25] | 86K | Provide the bounding box coordinate of the region this sentence describes. |
| **Total** | **665K** | |

**Table 4:** Instruction-following data mixture of LLaVA-1.5.
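
To illustrate how these response formatting prompts are used, the following hedged sketch appends the dataset-specific prompt to a question when building the human turn of a training sample; the exact templating in LLaVA-1.5 may differ, and the function name and prompt map here are illustrative.

```python
# Illustrative only: append the dataset-specific response-formatting prompt to
# each question when building the human turn of a training sample. The exact
# templating used by LLaVA-1.5 may differ.
FORMATTING_PROMPTS = {
    "vqav2": "Answer the question using a single word or phrase.",
    "a-okvqa": "Answer with the option's letter from the given choices directly.",
    "textcaps": "Provide a one-sentence caption for the provided image.",
}

def format_question(dataset: str, question: str) -> str:
    """Build the human turn: image placeholder, the question, then the prompt."""
    prompt = FORMATTING_PROMPTS.get(dataset, "")
    return f"<image>\n{question}\n{prompt}".strip()

print(format_question("vqav2", "What color is the bus?"))
```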

#### Web2Code Data

The Web2Code instruction-tuning dataset was released in [Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs](TODO: Add link). The dataset construction and instruction generation process involves four key components:

- **DWCG**: new webpage image-code pair data, created by generating high-quality HTML webpage-code pairs with GPT-3.5 following the CodeAlpaca prompt and converting them into instruction-following data.

- **DWCG<sub>R</sub>**: refined webpage code generation data, created by transforming existing datasets (WebSight and Pix2Code) into a LLaVA-style instruction-following format so they can be used to train MLLMs (a minimal sketch of this conversion follows the list).

- **DWU**: new text question-answer pair data for webpage understanding, generated with GPT-3.5 over our newly created webpage data.

- **DWU<sub>R</sub>**: refined WebSRC question-answer data, with quality improved using GPT-4.
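
The exact Web2Code record schema is not reproduced in this card; as a minimal sketch under that caveat, the snippet below wraps a (webpage screenshot, HTML source) pair in a LLaVA-style conversation record. The instruction text, file paths, and helper name are hypothetical.

```python
from pathlib import Path

# Hypothetical instruction; the actual Web2Code prompt pool is not reproduced here.
CODE_GEN_INSTRUCTION = "Generate the HTML code that reproduces the webpage shown in the image."

def webpage_pair_to_instruction(sample_id: str, screenshot: str, html_path: str) -> dict:
    """Wrap a (webpage screenshot, HTML source) pair in a LLaVA-style
    instruction-following record: the human turn carries the image and the
    instruction, the gpt turn carries the target HTML code."""
    html_code = Path(html_path).read_text(encoding="utf-8")
    return {
        "id": sample_id,
        "image": screenshot,
        "conversations": [
            {"from": "human", "value": f"<image>\n{CODE_GEN_INSTRUCTION}"},
            {"from": "gpt", "value": html_code},
        ],
    }

# Example usage (paths are illustrative):
# record = webpage_pair_to_instruction("dwcg_000001", "screenshots/000001.png", "html/000001.html")
```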

### Code Datasets

| Dataset | DWCG (ours) | DWCG<sub>R</sub> (ours) |
|-------------------------|-------------|-------------------------|
| **Instruction** | ✓ | ✓ |
| **Source** | Synthetic | Synthetic |
| **Size** | 60K | 824.7K |
| **Avg Length (tokens)** | 471.8±162.3 | 652.85±157.0 |
| **Avg Tag Count** | 28.1±10.6 | 35.3±9.0 |
| **Avg DOM Depth** | 5.3±1.0 | 6.5±1.0 |
| **Avg Unique Tags** | 13.6±2.7 | 13.5±2.5 |

**Table 5:** DWCG is a newly generated GPT-3.5-based dataset, while DWCG<sub>R</sub> is the refined dataset that utilizes the WebSight and Pix2Code datasets.


### Webpage Understanding Datasets

| Dataset | DWU | DWU<sub>R</sub> |
|-----------------|--------|-----------------|
| **Instruction** | ✓ | ✓ |
| **Size** | 243.5K | 51.5K |

**Table 6:** Distribution of the DWU and DWU<sub>R</sub> datasets. Both datasets include high-quality question-answer pairs for webpage understanding.



## Examples

Example 1:

<center><img src="ori.png" alt="Original input image"/></center>

**Image 1:** Original input image.

<center><img src="crystalchat.png" alt="CrystalChat-7B model generated output"/></center>

**Image 2:** CrystalChat-7B-Web2Code model output.

Example 2:

<center><img src="hand_draw1_.png" alt="CrystalChat-7B model generated output"/></center>

**Image 3:** Hand-drawn webpage input and the corresponding CrystalChat-7B-Web2Code generated output.

## Loading Crystal

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    padding_side="right",
    trust_remote_code=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
)
```
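
The following is a minimal text-only generation sketch, assuming the model's remote code exposes the standard Hugging Face `generate` API; image-conditioned prompting depends on the model's custom preprocessing and is not shown here. The prompt and decoding settings are illustrative.

```python
# Minimal text-only generation sketch; assumes the standard generate() API is
# available through the model's remote code. Image inputs require the model's
# own preprocessing and are not covered here.
prompt = "Write a minimal HTML page with a centered heading that says Hello."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,  # greedy decoding for a reproducible result
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```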



## LLM360

LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development.

Crystal-based models enable community-owned AGI by creating standards and tools to advance the bleeding edge of LLM capability and to empower knowledge transfer, research, and development.

We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high-quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators.

[Visit us](https://www.llm360.ai/)

## Citation

**BibTeX:**

```bibtex
@article{yun2024web2code,
  title={Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs},
  author={Yun, Sukmin and Lin, Haokun and Thushara, Rusiru and Bhat, Mohammad Qazim and Wang, Yongxin and Jiang, Zutao and Deng, Mingkai and Wang, Jinhong and Tao, Tianhua and Li, Junbo and others},
  journal={arXiv preprint arXiv:2406.20098},
  year={2024}
}
```