---
license: cc-by-4.0
datasets:
- osunlp/TableInstruct
language:
- en
---

# TableLlama: Towards Open Large Generalist Models for Tables

Project Page: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)

Paper: [https://arxiv.org/abs/2311.09206](https://arxiv.org/abs/2311.09206)

Code: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)

## Introduction
We introduce TableLlama, an open-source large generalist model specifically tailored for a variety of table-based tasks. TableLlama is trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), a meticulously curated instruction tuning dataset for tables. TableLlama is tuned on 2.6 million table-based task examples and can handle a context length of up to 8K!

## Model
[TableLlama-7B](https://huggingface.co/osunlp/TableLlama/)

## Training Data
The models are trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), a comprehensive table-based instruction tuning dataset that covers a variety of real-world tables and realistic tasks. We include 14 datasets of 11 tasks in total. Check out the dataset card for more details.
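
As a rough illustration (an assumption, not an official loading script), the dataset can be pulled from the Hugging Face Hub with the `datasets` library; whether it loads without extra arguments depends on the repository layout, so check the dataset card:

```python
# Minimal sketch (assumption, not an official loader): fetch TableInstruct
# from the Hugging Face Hub with the `datasets` library.
from datasets import load_dataset

# The repo id comes from this card; the availability of a default
# config/split is an assumption -- see the dataset card for the file layout.
table_instruct = load_dataset("osunlp/TableInstruct")
print(table_instruct)
```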

## Training Procedure
The models are fine-tuned on the TableInstruct dataset using the fully fine-tuned version of LongLoRA (7B) as the base model, which replaces the vanilla attention mechanism of the original Llama-2 (7B) with shift short attention. Training takes 9 days on 48 A100 GPUs. Check out our paper for more details.

## Evaluation
The models are evaluated on 8 in-domain datasets of 8 tasks and 6 out-of-domain datasets of 4 tasks. Please refer to our paper for the detailed results.

## Usage
You can use the models through Hugging Face's Transformers library.
Check our GitHub repo for more advanced usage: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
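
Here is a minimal sketch of loading the model with the standard Transformers API and generating a response; the generation settings are illustrative rather than the official inference recipe:

```python
# Minimal sketch: load TableLlama-7B with Hugging Face Transformers and
# generate a response. Generation settings here are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "osunlp/TableLlama"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Build the prompt with the template shown in the Prompt Format section below.
prompt = "..."  # replace with a fully formatted prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```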

## Prompt Format
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that
appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Question:
{question}

### Response:
```
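
As a small illustration of how the placeholders are filled, here is a sketch that assembles a prompt string from the template; the instruction, table serialization, and question values are hypothetical examples, not records from TableInstruct:

```python
# Sketch: build a prompt string following the template above (including its
# line break). The example values are hypothetical, not real TableInstruct data.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that\n"
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Question:\n{question}\n\n"
    "### Response:\n"
)

prompt = PROMPT_TEMPLATE.format(
    instruction="Answer the question based on the given table.",
    input="| city | country | population |\n| Columbus | USA | 905,748 |",
    question="Which country is Columbus located in?",
)
print(prompt)
```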

## Limitations
We've tried our best to build table generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of the table tasks and datasets, and not all table-based tasks can be covered comprehensively.

## Citation
If you use the models, data, or code from this project, please cite the original paper:

```
@misc{zhang2023tablellama,
  title={TableLlama: Towards Open Large Generalist Models for Tables},
  author={Tianshu Zhang and Xiang Yue and Yifei Li and Huan Sun},
  year={2023},
  eprint={2311.09206},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```