davanstrien committed
Commit 286635a · 1 parent: b7d37c0

Update README.md

Files changed (1):
  1. README.md +46 -14
README.md CHANGED
@@ -74,6 +74,7 @@ This dataset contains a subset of data used in the paper [You Actually Look Twic
 - TableZone
 - TitlePageZone
 
+
 ### Supported Tasks and Leaderboards
 
 - `object-detection`: This dataset can be used to train a model for object-detection on historic document images.
@@ -83,14 +84,14 @@ This dataset contains a subset of data used in the paper [You Actually Look Twic
 
 This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines.
 
- - The first configuration `YOLO` uses the original format of the data.
- - The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This is done in particular to make it easier to work with the `feature_extractor`s from the `Transformers` models for object detection which expect data to be in a COCO style format.
+ - The first configuration, `YOLO`, uses the data's original format.
+ - The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This is done to make it easier to work with the `feature_extractor`s of the `Transformers` object-detection models, which expect data to be in a COCO-style format.
 
 ### Data Instances
 
 An example instance from the COCO config:
 
- ``` python
+ ```python
 {'height': 5610,
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3782x5610 at 0x7F3B785609D0>,
  'image_id': 0,
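As a quick illustration of the two configurations described in the hunk above, loading the data with the `datasets` library might look like the sketch below. The repository ID `user/dataset-name` is a placeholder, and the exact configuration names (`YOLO`, `COCO`) are assumed from the README text rather than checked against the loading script.

```python
# Hedged sketch: the repository ID is a placeholder and the configuration
# names ("YOLO", "COCO") are assumed from the README text above.
from datasets import load_dataset

# Original YOLO-style annotations
yolo_ds = load_dataset("user/dataset-name", "YOLO")

# COCO-style annotations, closer to what Transformers object-detection
# feature extractors / image processors expect
coco_ds = load_dataset("user/dataset-name", "COCO")

print(yolo_ds)  # DatasetDict with train / validation / test splits
print(coco_ds["train"][0].keys())  # keys such as 'image', 'image_id', 'height', ... (see the instance above)
```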
@@ -141,7 +142,7 @@ An example instance from the COCO config:
 
 An example instance from the YOLO config:
 
- ``` python
+ ```python
 {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3782x5610 at 0x7F3B785EFA90>,
  'objects': {'bbox': [[2144, 292, 1198, 170],
             [1120, 1462, 414, 331],
@@ -159,7 +160,7 @@ An example instance from the YOLO config:
 The fields for the YOLO config:
 
 - `image`: the image
- - `objects`: the annotations which consits of:
+ - `objects`: the annotations which consist of:
   - `bbox`: a list of bounding boxes for the image
   - `label`: a list of labels for this image
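To make the field layout above concrete, the sketch below iterates over an example's `objects` and draws its boxes with Pillow. It assumes, without verification, that each `bbox` is `[x_min, y_min, width, height]` in absolute pixels and that `label` holds class indices; adjust the unpacking if the config actually stores another convention.

```python
# Hedged sketch: assumes bboxes are [x_min, y_min, width, height] in absolute
# pixels and labels are class ids -- check the loading script before relying
# on this convention.
from PIL import ImageDraw

def draw_annotations(example):
    image = example["image"].convert("RGB").copy()
    draw = ImageDraw.Draw(image)
    for (x, y, w, h), label in zip(example["objects"]["bbox"],
                                   example["objects"]["label"]):
        draw.rectangle([x, y, x + w, y + h], outline="red", width=5)
        draw.text((x, max(0, y - 40)), str(label), fill="red")
    return image

# e.g. draw_annotations(yolo_ds["train"][0]).save("example_with_boxes.png")
```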
 
@@ -182,16 +183,48 @@ The fields for the COCO config:
 
 The dataset contains a train, validation and test split with the following numbers per split:
 
- | | train | validation | test |
- |----------|-------|------------|------|
- | examples | 196 | 22 | 135 |
+ | Split | Number of images |
+ |-------|------------------|
+ | Train | 854              |
+ | Dev   | 154              |
+ | Test  | 139              |
+
+ A more detailed summary of the dataset (copied from the paper):
+
+ | Zone type                | Train | Dev | Test | Total | Average area | Median area |
+ |--------------------------|------:|----:|-----:|------:|-------------:|------------:|
+ | DropCapitalZone          |  1537 | 180 |  222 |  1939 |         0.45 |        0.26 |
+ | MainZone                 |  1408 | 253 |  258 |  1919 |        28.86 |       26.43 |
+ | NumberingZone            |   421 |  57 |   76 |   554 |         0.18 |        0.14 |
+ | MarginTextZone           |   396 |  59 |   49 |   504 |         1.19 |        0.52 |
+ | GraphicZone              |   289 |  54 |   50 |   393 |         8.56 |        4.31 |
+ | MusicZone                |   237 |  71 |    0 |   308 |         1.22 |        1.09 |
+ | RunningTitleZone         |   137 |  25 |   18 |   180 |         0.95 |        0.84 |
+ | QuireMarksZone           |    65 |  18 |    9 |    92 |         0.25 |        0.21 |
+ | StampZone                |    85 |   5 |    1 |    91 |         1.69 |        1.14 |
+ | DigitizationArtefactZone |     1 |   0 |   32 |    33 |         2.89 |        2.79 |
+ | DamageZone               |     6 |   1 |   14 |    21 |         1.50 |        0.02 |
+ | TitlePageZone            |     4 |   0 |    1 |     5 |        48.27 |       63.39 |
+
 
 ## Dataset Creation
 
- > [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The test set is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8
- .
+ This dataset is derived from:
+
+ - CREMMA Medieval (Pinche, A. (2022). Cremma Medieval (Version Bicerin 1.1.0) [Data set](https://github.com/HTR-United/cremma-medieval))
+ - CREMMA Medieval Lat (Clérice, T. and Vlachou-Efstathiou, M. (2022). Cremma Medieval Latin [Data set](https://github.com/HTR-United/cremma-medieval-lat))
+ - Eutyches (Vlachou-Efstathiou, M. Voss.Lat.O.41 - Eutyches "de uerbo" glossed [Data set](https://github.com/malamatenia/Eutyches))
+ - Gallicorpora HTR-Incunable-15e-Siecle (Pinche, A., Gabay, S., Leroy, N., & Christensen, K. Données HTR incunable du 15e siècle [Computer software](https://github.com/Gallicorpora/HTR-incunable-15e-siecle))
+ - Gallicorpora HTR-MSS-15e-Siecle (Pinche, A., Gabay, S., Leroy, N., & Christensen, K. Données HTR manuscrits du 15e siècle [Computer software](https://github.com/Gallicorpora/HTR-MSS-15e-Siecle))
+ - Gallicorpora HTR-imprime-gothique-16e-siecle (Pinche, A., Gabay, S., Vlachou-Efstathiou, M., & Christensen, K. HTR-imprime-gothique-16e-siecle [Computer software](https://github.com/Gallicorpora/HTR-imprime-gothique-16e-siecle))
+
+ plus a few hundred newly annotated images; the test set in particular is completely novel and is based on early prints and manuscripts.
+
+ These additional annotations were created by correcting the predictions of an early version of the model developed in the paper, using the [roboflow](https://roboflow.com/) platform.
+
 ### Curation Rationale
 
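For reference, the counts in the two tables above can be re-derived from a loaded `DatasetDict`. The sketch below assumes the `yolo_ds` object from the earlier loading sketch and simply counts images per split and zone labels across splits; labels may come back as integer ids rather than zone names, depending on how the loading script defines the `label` feature.

```python
# Hedged sketch: recomputes split sizes and per-class zone counts from a
# loaded DatasetDict (see the earlier loading sketch). Labels may be integer
# class ids rather than zone names.
from collections import Counter

def summarise(ds_dict):
    # per-split image counts (cf. the first table above)
    for split, ds in ds_dict.items():
        print(f"{split}: {len(ds)} images")
    # per-class zone counts across all splits (cf. the detailed table above)
    counts = Counter()
    for ds in ds_dict.values():
        for objects in ds["objects"]:
            counts.update(objects["label"])
    for label, count in counts.most_common():
        print(label, count)

# e.g. summarise(yolo_ds)
```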
 
@@ -200,6 +233,8 @@ The dataset contains a train, validation and test split with the following numbe
 
 ### Source Data
 
+ The sources of the data are described above.
+
 #### Initial Data Collection and Normalization
 
 [More information needed]
@@ -211,12 +246,9 @@ The dataset contains a train, validation and test split with the following numbe
 
 ### Annotations
 
- [More information needed]
-
-
 #### Annotation process
 
- [More information needed]
+ Additional annotations produced for this dataset were created by correcting the predictions of an early version of the model developed in the paper, using the [roboflow](https://roboflow.com/) platform.
 
 #### Who are the annotators?
 
 