---
annotations_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
paperswithcode_id: coco
pretty_name: COCO
size_categories:
- 100K<n<1M
source_datasets:
- https://huggingface.co./datasets/HuggingFaceM4/COCO
task_categories:
- image-classification
---

## General Information

**Title**: COCO-AB

**Description**: 
The COCO-AB dataset extends the COCO 2014 training set with annotation byproducts (AB): additional signals recorded during the annotation process itself. 
It covers 82,765 reannotated images from the original COCO 2014 training set. 
The dataset is relevant to computer vision, specifically object detection and localisation. 
Its aim is to provide a richer understanding of the images, at no extra annotation cost, by recording the annotators' actions and interactions during annotation.

**Links**:

- [ICCV'23 Paper](https://arxiv.org/abs/2303.17595)
- [Main Repository](https://github.com/naver-ai/NeglectedFreeLunch)
- [COCO Annotation Interface](https://github.com/naver-ai/coco-annotation-tool)


## Collection Process

**Collection Details**:
The additional annotations for COCO-AB were collected from Amazon Mechanical Turk (MTurk) workers in the US region, since the task was described in English. 
The task was designed as a human intelligence task (HIT), and workers were required to have an approval rate of at least 90% to ensure annotation quality.
Each HIT contained 20 pages of annotation tasks, with a single candidate image to be tagged per page.
We followed the original COCO annotation interface as closely as possible;
see the [GitHub repository](https://github.com/naver-ai/coco-annotation-tool) and the [paper](https://arxiv.org/abs/2303.17595) for further details.


A total of 4,140 HITs were completed, of which 365 were rejected based on criteria such as the recall rate, the accuracy of icon locations, the task completion rate, and verification against our database and a secret hash code (see **Annotation Rejection** below).

**Annotator Compensation**:
Annotators were paid 2.0 USD per HIT. 
The median time to complete a HIT was 12.1 minutes, which corresponds to an approximate hourly wage of 9.92 USD (2.0 USD / 12.1 min × 60 min/h). 
This wage is above the US federal minimum hourly wage. 
In total, 8,280 USD was paid to the MTurk annotators, plus an additional 20% fee paid to Amazon.

**Annotation Rejection**: 
We rejected a HIT under any of the following circumstances (a sketch of this rule as code follows the list):

- The recall rate was lower than 0.333.
- The accuracy of the icon locations was lower than 0.75.
- The annotator completed fewer than 16 of the 20 pages of tasks.
- The annotation was not found in our database, and the secret hash code confirming completion was incorrect.

In total, 365 out of 4,140 completed HITs (8.8%) were rejected.
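
To make the criteria concrete, here is a hedged sketch of the rejection rule as a predicate. The record fields and their names are our assumptions for illustration; the actual review pipeline is not part of this dataset.

```python
# Hypothetical sketch of the HIT rejection rule described above.
# The field names on `hit` are assumptions for illustration only.
def is_rejected(hit: dict) -> bool:
    if hit["recall"] < 0.333:                 # recall over the image's class list
        return True
    if hit["icon_location_accuracy"] < 0.75:  # accuracy of icon placement
        return True
    if hit["pages_completed"] < 16:           # out of the 20 pages per HIT
        return True
    # Completion could not be verified: not found in the database
    # and the secret completion hash code was incorrect.
    if not hit["found_in_database"] and not hit["hash_code_correct"]:
        return True
    return False
```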


**Collection Time**:
The entire annotation collection process took place between January 9, 2022, and January 12, 2022.

## Data Schema

```json
{
  "image_id": 459214,
  "originalImageHeight": 428,
  "originalImageWidth": 640,
  "categories": ["car", "bicycle"],
  "imageHeight": 450,
  "imageWidth": 450,
  "timeSpent": 22283,
  "actionHistories": [
    {"actionType": "add",
     "iconType": "car",
     "pointTo": {"x": 0.583, "y": 0.588},
     "timeAt": 16686},
    {"actionType": "add",
     "iconType": "bicycle",
     "pointTo": {"x": 0.592, "y": 0.639},
     "timeAt": 16723}
  ],
  "categoryHistories": [
    {"categoryIndex": 1,
     "categoryName": "Animal",
     "timeAt": 10815,
     "usingKeyboard": false},
    {"categoryIndex": 10,
     "categoryName": "IndoorObjects",
     "timeAt": 19415,
     "usingKeyboard": false}
  ],
  "mouseTracking": [
    {"x": 0.679, "y": 0.862, "timeAt": 15725},
    {"x": 0.717, "y": 0.825, "timeAt": 15731}
  ],
  "worker_id": "00AA3B5E80",
  "assignment_id": "3AMYWKA6YLE80HK9QYYHI2YEL2YO6L",
  "page_idx": 8
}
```
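
As a minimal sketch of how records in this schema can be consumed, assuming the byproducts are shipped as a JSON Lines file (the file name `coco_ab.jsonl` below is hypothetical), the data loads directly with the `datasets` library or with `pandas`:

```python
# Minimal loading sketch. The file name "coco_ab.jsonl" is hypothetical;
# substitute the actual data file distributed with this dataset.
from datasets import load_dataset
import pandas as pd

ds = load_dataset("json", data_files="coco_ab.jsonl", split="train")
print(ds[0]["image_id"], ds[0]["categories"])

# The same file can be inspected as a DataFrame, e.g. to summarise the
# time spent per image (timeSpent appears to be in milliseconds).
df = pd.read_json("coco_ab.jsonl", lines=True)
print(df["timeSpent"].describe())
```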

## Usage

The annotation byproducts can be used to improve model generalisability and robustness.
This is appealing because the byproducts come at no extra annotation cost.
For more information, refer to our [ICCV'23 Paper](https://arxiv.org/abs/2303.17595).
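
For illustration, below is a hedged sketch of one possible use: converting the recorded icon placements into point-level localisation targets. The coordinate convention (that `pointTo` is normalised over the original image) is our assumption here, not a documented guarantee; check the annotation-tool repository above for the exact semantics.

```python
# Hypothetical helper: turn a COCO-AB record's action history into
# (class, x, y) point annotations for weakly supervised localisation.
# Assumes pointTo is normalised to [0, 1] over the original image size;
# verify this against the annotation-tool repository before relying on it.
def extract_points(record: dict) -> list[tuple[str, float, float]]:
    width = record["originalImageWidth"]
    height = record["originalImageHeight"]
    points = []
    for action in record["actionHistories"]:
        if action["actionType"] != "add":
            continue  # skip any non-placement actions
        points.append((
            action["iconType"],
            action["pointTo"]["x"] * width,
            action["pointTo"]["y"] * height,
        ))
    return points

# With the example record from the Data Schema section (coordinates rounded):
# extract_points(record) ≈ [('car', 373.1, 251.7), ('bicycle', 378.9, 273.5)]
```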

## Dataset Statistics

Annotators reannotated 82,765 of the 82,783 images (99.98%) in the COCO 2014 training set; for those images, we recorded the annotation byproducts.
On average, each HIT recalls 61.9% of the classes present in an image (standard deviation ±0.118%p).
The average localisation accuracy of the icon placements is 92.3% (standard deviation ±0.057%p).


## Ethics and Legalities
The crowdsourced annotators were fairly compensated for their time at a rate well above the U.S. federal minimum wage. 
In terms of data privacy, the dataset maintains the same ethical standards as the original COCO dataset.
Worker identifiers were anonymized using a non-reversible hashing function, ensuring privacy.

Our data collection has obtained IRB approval from an author’s institute. 
For future collections of annotation byproducts, we note the potential risk that byproducts may contain annotators' private information. 
Data collectors may even attempt to harvest more private information as byproducts. 
We urge data collectors not to collect or exploit private information from annotators. 
Whenever appropriate, one must ask for the annotators’ consent.

## Maintenance and Updates
This section will be updated whenever there are changes or updates to the dataset.

## Known Limitations
Due to budget constraints, we were not able to acquire 8+ annotations per sample, as was done in the original work.

## Citation Information
```
@inproceedings{han2023iccv,
  title = {Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```