hheiden-roots committed
Commit 83da7ce · verified · 1 Parent(s): 7c9703c

Update README.md

Files changed (1):
  1. README.md +72 -31
README.md CHANGED
@@ -1,34 +1,75 @@
 ---
 license: cc-by-4.0
-dataset_info:
-  features:
-  - name: screen_id
-    dtype: string
-  - name: screen_annotation
-    dtype: string
-  - name: file_name
-    dtype: string
-  - name: image
-    dtype: image
-  splits:
-  - name: train
-    num_bytes: 1683942681.288
-    num_examples: 15548
-  - name: valid
-    num_bytes: 240110622.938
-    num_examples: 2311
-  - name: test
-    num_bytes: 452042111.53
-    num_examples: 4217
-  download_size: 1880354960
-  dataset_size: 2376095415.756
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-  - split: valid
-    path: data/valid-*
-  - split: test
-    path: data/test-*
+task_categories:
+- image-to-text
+language:
+- en
+tags:
+- screens
+pretty_name: RICO Screen Annotations
+size_categories:
+- 10K<n<100K
 ---
+# Dataset Card for RICO Screen Annotations
+
+This is a standardization of Google's Screen Annotation dataset on a subset of RICO screens, as described in the ScreenAI paper.
+Unlike the original, this version converts the integer-based bounding boxes into floating-point bounding boxes with two decimal places of precision.
+
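The bounding-box conversion mentioned above can be sketched roughly as follows. This is an illustrative guess, assuming boxes are normalized by image width/height before rounding; it is not the dataset's actual conversion code:

```python
def to_float_bbox(bbox, width, height):
    """Convert an integer pixel box (x0, y0, x1, y1) into floats rounded
    to two decimal places. Normalizing by image size is an assumption here;
    the dataset's exact conversion may differ."""
    x0, y0, x1, y1 = bbox
    return (round(x0 / width, 2), round(y0 / height, 2),
            round(x1 / width, 2), round(y1 / height, 2))

# Example: a box on a 1080x1920 RICO screenshot
print(to_float_bbox((270, 96, 810, 480), 1080, 1920))  # (0.25, 0.05, 0.75, 0.25)
```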
+## Dataset Details
+
+### Dataset Description
+
+This is an image-to-text annotation format first introduced in Google's ScreenAI paper.
+The idea is to standardize an expected text output that is reasonable for the model to follow,
+fusing together tasks such as element detection, referring expression generation/recognition, and element classification.
+
+- **Curated by:** Google Research
+- **Language(s) (NLP):** English
+- **License:** CC-BY-4.0
+
+### Dataset Sources
+
+- **Repository:** [google-research/screen_annotation](https://github.com/google-research-datasets/screen_annotation/tree/main)
+- **Paper:** [ScreenAI](https://arxiv.org/abs/2402.04615)
+
+## Uses
+
+### Direct Use
+
+Pre-training of multimodal models to better understand screens.
+
+## Dataset Structure
+
+- `screen_id`: Screen ID in the RICO dataset
+- `screen_annotation`: Target output string
+- `image`: The RICO screenshot
+
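Given the split names in the YAML header and the fields listed above, loading the data could be sketched with the Hugging Face `datasets` library. The repository id below is a placeholder, not the dataset's actual path:

```python
def load_split(split="train"):
    """Load one split ("train", "valid", or "test") via the Hugging Face
    `datasets` library. The repo id is a placeholder -- substitute this
    dataset's actual Hub path."""
    from datasets import load_dataset  # third-party: pip install datasets
    return load_dataset("your-org/rico-screen-annotation", split=split)

# Usage (requires network access):
# ds = load_split("valid")
# print(ds[0]["screen_id"], ds[0]["screen_annotation"][:80])
```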
+## Dataset Creation
+
+### Curation Rationale
+
+> The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The mobile screenshots are directly taken from the publicly available Rico dataset. The annotations are in text format, and contain information on the UI elements present on the screen: their type, their location, the text they contain or a short description. This dataset has been introduced in the paper ScreenAI: A Vision-Language Model for UI and Infographics Understanding and can be used to improve the screen understanding capabilities of multimodal (image+text) models.
+
+## Citation
+
+**BibTeX:**
+
+```bibtex
+@misc{baechler2024screenai,
+  title={ScreenAI: A Vision-Language Model for UI and Infographics Understanding},
+  author={Gilles Baechler and Srinivas Sunkara and Maria Wang and Fedir Zubach and Hassan Mansoor and Vincent Etter and Victor Cărbune and Jason Lin and Jindong Chen and Abhanshu Sharma},
+  year={2024},
+  eprint={2402.04615},
+  archivePrefix={arXiv},
+  primaryClass={cs.CV}
+}
+```
+
+## Dataset Card Authors
+
+Hunter Heidenreich, Roots Automation
+
+## Dataset Card Contact
+
+hunter "dot" heidenreich AT rootsautomation `DOT` com