---
license: apache-2.0
task_categories:
- image-classification
size_categories:
- n<1K
---

## Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts | [Paper](https://arxiv.org/abs/2303.17595)

Dongyoon Han<sup>1*</sup>, Junsuk Choe<sup>2*</sup>, Seonghyeok Chun<sup>3</sup>, John Joon Young Chung<sup>4</sup>

Minsuk Chang<sup>5</sup>, Sangdoo Yun<sup>1</sup>, Jean Y. Song<sup>6</sup>, Seong Joon Oh<sup>7&dagger;</sup>

<sub>\* Equal contribution</sub> <sub>&dagger; Corresponding author</sub>

<sup>1</sup> <sub>NAVER AI LAB</sub> <sup>2</sup> <sub>Sogang University</sub> <sup>3</sup> <sub>Dante Company</sub> <sup>4</sup> <sub>University of Michigan</sub> <sup>5</sup> <sub>NAVER AI LAB, currently at Google</sub> <sup>6</sup> <sub>DGIST</sub> <sup>7</sup> <sub>University of T&uuml;bingen</sub>

Supervised learning of image classifiers distills human knowledge into a parametric model *f* through pairs of images and corresponding labels (*X*, *Y*). We argue that this simple and widely used representation of human knowledge neglects rich auxiliary information from the annotation procedure, such as the time series of mouse traces and clicks.

<p align=center>
<img src="https://user-images.githubusercontent.com/7447092/203720567-dc6e1277-84d2-439c-a9f8-879e31c04e6f.png" alt="imagenet-byproduct-sample" width=500px />
</p>

Our insight is that such **annotation byproducts** *Z* provide approximate human attention that weakly guides the model to focus on foreground cues, reducing spurious correlations and discouraging shortcut learning.

We have created **ImageNet-AB** and **COCO-AB** to verify this: they are the ImageNet and COCO training sets enriched with sample-wise annotation byproducts, collected by replicating the respective original annotation tasks.

We refer to this new paradigm of training models with annotation byproducts as **learning using annotation byproducts (LUAB)**.

<p align=center>
<img src="https://user-images.githubusercontent.com/7447092/203721515-2aea133d-1a77-4463-8372-5f0e0dbe4d2d.png" alt="luab" width=500px />
</p>

We show that a simple multitask loss for regressing *Z* together with *Y* already improves the generalisability and robustness of the learned models. Compared to the original supervised learning, LUAB does not require extra annotation costs.

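The sketch below illustrates one way such a multitask objective could look: a shared backbone with a classification head for *Y* and a small regression head for a byproduct signal *Z* (e.g. a normalised attended location). The module names, the two-dimensional *Z*, and the weight `lambda_z` are illustrative assumptions for exposition, not the exact formulation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LUABModel(nn.Module):
    """Shared backbone with a label head (Y) and a byproduct regression head (Z)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, z_dim: int = 2):
        super().__init__()
        self.backbone = backbone                         # any feature extractor returning (B, feat_dim)
        self.y_head = nn.Linear(feat_dim, num_classes)   # predicts the class label Y
        self.z_head = nn.Linear(feat_dim, z_dim)         # regresses the annotation byproduct Z

    def forward(self, x):
        feats = self.backbone(x)
        return self.y_head(feats), self.z_head(feats)


def luab_loss(y_logits, z_pred, y, z, lambda_z: float = 0.1):
    """Cross-entropy on the label plus a weighted L2 term on the byproduct."""
    return F.cross_entropy(y_logits, y) + lambda_z * F.mse_loss(z_pred, z)
```
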
### Dataloader for ImageNet-AB and COCO-AB

We provide example dataloaders for the annotation byproducts; a rough standalone sketch for inspecting the raw files is also given after the list below.

* Dataloader for ImageNet-AB: [imagenet_dataloader.ipynb](imagenet_dataloader.ipynb)
* Dataloader for COCO-AB: [coco_dataloader.ipynb](coco_dataloader.ipynb)

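For a quick look at the raw byproduct records without the notebooks, something along these lines should work. The file name below is a placeholder and the exact field names depend on the released JSON schema, so treat this as a rough sketch rather than a verified loader.

```python
import pandas as pd

# Placeholder path: substitute the actual byproduct JSON file shipped with this dataset.
df = pd.read_json("imagenet_ab_train.json")   # use lines=True if the file is JSON Lines

print(df.columns)   # inspect the available byproduct fields
print(df.head())
```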

### Annotation tools for ImageNet and COCO

* Annotation tool for ImageNet: [github.com/naver-ai/imagenet-annotation-tool](https://github.com/naver-ai/imagenet-annotation-tool)
* Annotation tool for COCO: [github.com/naver-ai/coco-annotation-tool](https://github.com/naver-ai/coco-annotation-tool)

### License

```
MIT License

Copyright (c) 2023-present NAVER Cloud Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

### Citing our work

```bibtex
@article{han2023arxiv,
  title = {Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  journal = {arXiv preprint arXiv:2303.17595},
  year = {2023}
}
```