Commit 75243fa (parent c1ab110) by jakezhao2024: minor format change

README.md
We developed and released TableGPT2-7B, a large-scale decoder specifically tailored for data-intensive tasks, with a focus on interpreting and analyzing tabular data. TableGPT2-7B is designed to bridge the gap between conventional LLM capabilities and the real-world demands of tabular/structured data tasks, such as those in business intelligence (BI), automated data-driven analysis, and applications that interact closely with databases or data warehouses.

**Model Developers**

Zhejiang University

**Variations**

TableGPT2 is available in two configurations, 7B and 72B parameters, both derived from the Qwen2.5 model family and optimized for handling structured data in tabular formats. Currently, only the 7B version has been released to the public.
**Input**

TableGPT2-7B accepts both text and tabular data as input.

**Output**

TableGPT2-7B produces text-based outputs, specifically optimized for coding tasks, data interpretation, and BI-focused question answering.
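Because only the decoder is released for now (see Model Architecture below), tabular input is passed to the model as serialized text inside the prompt. The following is a minimal inference sketch using Hugging Face `transformers`; the repo id `tablegpt/TableGPT2-7B`, the CSV-in-prompt layout, and the example table are assumptions for illustration, not a guaranteed interface.

```python
# A minimal sketch, assuming the model is published at "tablegpt/TableGPT2-7B"
# and that a table can be passed as plain CSV text inside the user prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tablegpt/TableGPT2-7B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical table and question; the CSV-in-prompt layout is an assumption.
table_csv = "region,revenue\nEast,1200\nWest,950"
question = "Which region has the higher revenue?"
prompt = f"Given the following table (CSV):\n{table_csv}\n\nQuestion: {question}"

# Qwen2.5-derived checkpoints ship a chat template, so format a chat turn.
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```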
**Language**

Our model places a strong emphasis on Chinese corpora, and currently, queries in other languages may have limited support.
**Other Requirements**

We highly recommend exploring [our repository on GitHub](https://github.com/tablegpt/tablegpt-agent), where users can integrate this model into our agent workflow for enhanced performance.
**Model Architecture**

TableGPT2-7B is built upon the Qwen2.5 architecture and includes specialized encoding for tabular data. It features a unique semantic encoder designed to interpret tabular data, capturing insights from rows, columns, and entire tables. Continual pretraining (CPT) and supervised fine-tuning (SFT) have been applied to equip the model for real-world BI applications and complex query processing.

For now, the standalone decoder is open-sourced and fully functional without requiring the encoder. The encoder is still under preparation, pending engineering considerations, primarily because we hope to provide tighter integration with DeepSpeed and vLLM.
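Until that tighter integration ships, the released decoder can already be served with vLLM's stock OpenAI-compatible server, since it is a standard Qwen2.5-style causal LM. A minimal sketch follows, assuming the model is published as `tablegpt/TableGPT2-7B`; the port and example prompt are illustrative.

```python
# Launch the OpenAI-compatible server first (shell command, shown as a comment):
#   vllm serve tablegpt/TableGPT2-7B --port 8000
# Then query it with the standard OpenAI client; no vLLM-specific client is needed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tablegpt/TableGPT2-7B",  # must match the model name the server was given
    messages=[{
        "role": "user",
        "content": (
            "Given the table (CSV):\n"
            "region,revenue\nEast,1200\nWest,950\n\n"
            "Which region has the higher revenue?"
        ),
    }],
)
print(response.choices[0].message.content)
```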
| Model | Training Data | Params | Context Length | Training Volume | Tables |
| --- | --- | --- | --- | --- | --- |
| TableGPT2-7B | Multimodal data sources and BI-specific examples | 7B | 128K | 86B tokens CPT, 2.36M SFT samples | 593.8K |
**Status**

This model is static, trained on an offline dataset. Future versions may be released to enhance its performance on specialized tasks.
**License**

The TableGPT2-7B license permits both research and commercial use, with further details available in the [GitHub repository](https://github.com/tablegpt/tablegpt-agent).
**Research Paper**

TableGPT2-7B is introduced and validated in the paper "[TableGPT2: A Large Multimodal Model with Tabular Data Integration](URL_TODO)", available on arXiv.
**Where to send questions or comments about the model**

Inquiries and feedback are welcome at [[email protected]](mailto:[email protected]).
## Training Data

**Overview**

Training for TableGPT2-7B involved more than 593,800 curated tables, over 86 billion tokens for continual pretraining (CPT), and more than 2.36 million high-quality query-table-output tuples for supervised fine-tuning (SFT). This extensive dataset aims to meet the rigorous demands of modern applications involving structured or tabular data.
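For intuition, a query-table-output tuple of the kind used for SFT might look like the sketch below; the field names and the record itself are hypothetical illustrations, not the released dataset's actual schema.

```python
# Hypothetical shape of one query-table-output SFT tuple. Field names are
# illustrative only and do not reflect the released training data's schema.
sft_example = {
    "table": {
        "columns": ["region", "revenue"],
        "rows": [["East", 1200], ["West", 950]],
    },
    "query": "Which region has the higher revenue?",
    "output": "East has the higher revenue (1200 vs. 950).",
}
```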
**Data Freshness**

The training data has a cutoff of October 2024.
## Evaluation Results