---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: conversation
    # (nested conversation feature fields not shown)
    dtype: string
  - name: task_instruction
    dtype: string
  - name: images
    sequence: string
  splits:
  - name: cota_293k
    num_bytes: 684640621
    num_examples: 293105
  download_size: 107061603
  dataset_size: 684640621
configs:
- config_name: default
  data_files:
  - split: cota_293k
    path: data/cota_293k-*
---
# TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action

[Paper](https://arxiv.org/pdf/2412.05479) | [Website](https://taco-project.github.io/) | [Code](https://github.com/SalesforceAIResearch/TACO) | [Datasets](https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4)

## Summary
TL;DR: CoTA is a large-scale dataset of synthetic Chains-of-Thought-and-Action (CoTA) traces generated by multi-modal large language models or programs.

## Load data
```python
from datasets import load_dataset

dataset = load_dataset("agentstudio-family/cota-mantis", split="cota_293k")
```
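Each loaded example exposes the fields declared in the config above: `id`, `task_instruction`, `conversation`, and `images`. A minimal sketch for spot-checking one example (the `streaming` flag is optional and just avoids downloading the full split):

```python
from datasets import load_dataset

# Stream the split so a single example can be inspected without a full download.
dataset = load_dataset("agentstudio-family/cota-mantis", split="cota_293k", streaming=True)
example = next(iter(dataset))

print(example["id"])                # unique example identifier
print(example["task_instruction"])  # the task prompt for this example
print(len(example["images"]))       # number of image references for this example
print(example["conversation"])      # the chain-of-thought-and-action trace
```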
## Dataset Card

### Dataset Details

This dataset contains synthetic chains of thoughts and actions involving 15 actions: `OCR`, `LocalizeObjects`, `GetObjects`,
`EstimateRegionDepth`, `EstimateObjectDepth`, `Crop`, `ZoomIn`, `QueryLanguageModel`, `GetImageToImagesSimilarity`, `GetImageToTextsSimilarity`,
`GetTextToImagesSimilarity`, `DetectFaces`, `QueryKnowledgeBase`, `Calculate`, and `SolveMathEquation`. Additionally, the `Terminate` action
is added so that the model can provide a final answer. You can find detailed statistics of this dataset,
including the distribution of data sources and the average and maximum numbers of images and turns, below:

<img src="https://huggingface.co/datasets/agentstudio-family/cota-mantis/resolve/main/dataset_stats.png" alt="dataset stats" width="800"/>
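To get a rough sense of how often each action occurs, you can scan the conversation traces for the action names listed above. This is only a sketch: it assumes the action names appear verbatim in the serialized conversation text, so verify that against a few examples first.

```python
from collections import Counter
from datasets import load_dataset

ACTIONS = [
    "OCR", "LocalizeObjects", "GetObjects", "EstimateRegionDepth", "EstimateObjectDepth",
    "Crop", "ZoomIn", "QueryLanguageModel", "GetImageToImagesSimilarity",
    "GetImageToTextsSimilarity", "GetTextToImagesSimilarity", "DetectFaces",
    "QueryKnowledgeBase", "Calculate", "SolveMathEquation", "Terminate",
]

dataset = load_dataset("agentstudio-family/cota-mantis", split="cota_293k", streaming=True)

counts = Counter()
for example in dataset.take(1000):  # sample a subset; drop .take() to scan the whole split
    text = str(example["conversation"])  # serialize whatever structure the field holds
    for action in ACTIONS:
        if action in text:
            counts[action] += 1  # count examples that mention the action at least once

print(counts.most_common())
```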
### Uses

The intended use of this dataset is to fine-tune multi-modal language models to produce chains of thoughts and actions that answer difficult and complex visual questions.
### Direct Use

You can directly use this dataset to train multi-modal language models with the Mantis codebase. To train LLaVA-OneVision models, please use [cota-llava](https://huggingface.co/collections/Salesforce/taco-models-and-datasets-675333e57dd34a4adc5f3ff4) in the [collection](https://huggingface.co/collections/Salesforce/taco-models-and-datasets-675333e57dd34a4adc5f3ff4).
To train other multi-modal language models, you might need to adapt the conversation format to work with your particular model, as sketched below.
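If you are training with a different codebase, the main work is remapping the `conversation` field into the chat format your trainer expects. The sketch below is a hypothetical starting point: the per-turn `role`/`content` keys are an assumption about the stored structure, not a documented schema, so adjust them after inspecting a real example.

```python
from datasets import load_dataset

def to_chat_messages(example):
    """Remap one CoTA example into a generic messages list.

    Assumes each conversation turn is a dict with 'role' and 'content' keys
    (an assumption; rename the keys to match the actual field structure).
    """
    messages = [
        {"role": turn["role"], "content": turn["content"]}
        for turn in example["conversation"]
    ]
    return {"messages": messages, "images": example["images"]}

dataset = load_dataset("agentstudio-family/cota-mantis", split="cota_293k")
converted = dataset.map(to_chat_messages)
```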
### Out-of-Scope Use

This dataset should not be used for testing models.
### Source Data

The source data comes from [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Mantis-Instruct](https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct),
which in turn collect data from various existing datasets, including COCO, AOKVQA, ScienceQA, and Visual Genome, among others.

#### Data Collection and Processing

<img src="https://huggingface.co/datasets/agentstudio-family/cota-mantis/resolve/main/data_gen.png" alt="dataset generation process" width="1000">
## Bias, Risks, and Limitations

Our dataset has the following limitations:
- The chains of thoughts and actions are generated by gpt-4o-2024-08-06 and thus inherit its biases.
- The actions are somewhat limited, as they mostly cover vision-centric tools such as depth estimation (`EstimateRegionDepth`, `EstimateObjectDepth`) and some generic tools such as `QueryKnowledgeBase`.
- Please refer to the paper for additional limitations.
## License

The CoTA datasets are licensed under the noncommercial CC-BY-NC 4.0 license. Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data. This release is for research purposes only, in support of an academic paper.
## Citation
```bibtex
@misc{ma2024tacolearningmultimodalaction,
      title={TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action},
      author={Zixian Ma and Jianguo Zhang and Zhiwei Liu and Jieyu Zhang and Juntao Tan and Manli Shu and Juan Carlos Niebles and Shelby Heinecke and Huan Wang and Caiming Xiong and Ranjay Krishna and Silvio Savarese},
      year={2024},
      eprint={2412.05479},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.05479},
}
```