---
annotations_creators:
- machine-generated
language:
- de
- nl
- en
- fr
- es
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: Berlin State Library OCR
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- ocr
- library
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# Dataset Card for Berlin State Library OCR data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.
> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.
> For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).
### Supported Tasks and Leaderboards
This dataset is useful for training language models on historical/OCR'd text.
### Languages
The collection includes material across a large number of languages. The languages of the OCR text have been detected using [langid.py: An Off-the-shelf Language Identification Tool](https://aclanthology.org/P12-3005) (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. **Note:** not all examples may have been successfully matched to the language prediction table from the original data.
The frequency of the top ten languages in the dataset is shown below:
| language | frequency |
|----------|-----------|
| de | 3,209,630 |
| nl | 491,322 |
| en | 473,496 |
| fr | 216,210 |
| es | 68,869 |
| lb | 33,625 |
| la | 27,397 |
| pl | 17,458 |
| it | 16,012 |
| zh | 11,971 |
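As an illustration, the sketch below selects a language-specific subset using these predictions with the Hugging Face `datasets` library. The repository id `SBB/sbb-dc-ocr` and the 0.95 confidence threshold are assumptions made for this example.

```python
from datasets import load_dataset

# Load the full-text pages (assumes the Hub repository id "SBB/sbb-dc-ocr").
ds = load_dataset("SBB/sbb-dc-ocr", split="train")

# Keep only pages confidently identified as German by langid.py.
# The 0.95 threshold is an illustrative choice, not part of the dataset.
german_pages = ds.filter(
    lambda example: example["language"] == "de"
    and example["language_confidence"] >= 0.95
)

print(f"{len(german_pages)} German-language pages retained")
```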
## Dataset Structure
### Data Instances
Each example represents a single page of OCR'd text.
A single example of the dataset is as follows:
```python
{'file name': '00000045.xml',
'language': 'fr',
'language_confidence': 0.9999999999910871,
'ppn': '646426230',
'text': 'Fig. 156 Tirant les sorts au moyen de la divination de Wen-wang',
'wc': [0.6125000119,
0.4799999893,
0.7916666865,
0.8066666722,
0.7720000148,
0.5849999785,
0.7580000162,
0.9200000167,
0.6449999809,
0.6060000062,
0.6549999714,
0.6362500191]}
```
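Because `wc` stores one confidence value per token, averaging it gives a rough page-level quality signal. The sketch below assumes the repository id `SBB/sbb-dc-ocr`; the `mean_wc` column name is introduced only for illustration.

```python
from datasets import load_dataset

ds = load_dataset("SBB/sbb-dc-ocr", split="train")

def add_mean_word_confidence(example):
    """Attach the average OCR word confidence for the page as `mean_wc`."""
    wc = example["wc"]
    return {"mean_wc": sum(wc) / len(wc) if wc else 0.0}

ds = ds.map(add_mean_word_confidence)
print(ds[0]["mean_wc"])
```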
### Data Fields
- 'file name': filename of the original XML file
- 'text': OCR'd text for that page of the item
- 'wc': the word confidence for each token predicted by the OCR engine
- 'ppn': the 'Pica Production Number', an internal identifier used by the library; see [doi:10.5281/zenodo.2702544](https://doi.org/10.5281/zenodo.2702544) for more details
- 'language': the language predicted by `langid.py` (see above for more details)
- 'language_confidence': the confidence score given by `langid.py` for the language prediction
### Data Splits
This dataset contains only a single split `train`.
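With several million pages in this single split, it can be convenient to stream the data rather than download it all up front. The following is a minimal sketch using the `datasets` streaming mode, again assuming the repository id `SBB/sbb-dc-ocr`.

```python
from datasets import load_dataset

# Stream the single "train" split instead of downloading all pages at once.
streamed = load_dataset("SBB/sbb-dc-ocr", split="train", streaming=True)

for i, page in enumerate(streamed):
    print(page["ppn"], page["language"], page["text"][:80])
    if i == 4:  # look at the first five pages only
        break
```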
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
This dataset contains the text produced by running Optical Character Recognition (OCR) over digitized works held by the Berlin State Library. Of the 153,942 digitized works in the collection, 28,909 had been OCR-processed at the time of publication, yielding 4,988,099 full-text pages.
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
This dataset contains machine-produced annotations:
- the word confidence scores produced by the OCR engine used to generate the full-text materials
- the predicted language and associated confidence score produced by `langid.py` for each page
[More Information Needed]
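Both sets of machine-produced confidence scores can be combined into a rough quality filter when preparing training data. The thresholds in the sketch below are illustrative assumptions, not recommendations from the dataset authors.

```python
from datasets import load_dataset

ds = load_dataset("SBB/sbb-dc-ocr", split="train")

def is_high_quality(example, min_lang_conf=0.9, min_mean_wc=0.7):
    """Keep pages where both langid.py and the OCR engine report high confidence."""
    wc = example["wc"]
    mean_wc = sum(wc) / len(wc) if wc else 0.0
    return example["language_confidence"] >= min_lang_conf and mean_wc >= min_mean_wc

filtered = ds.filter(is_high_quality)
```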
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
This dataset contains historical material which may include names, addresses, etc., but these are unlikely to refer to living individuals.
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Labusch, Kai; Zellhöfer, David
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{labusch_kai_2019_3257041,
  author    = {Labusch, Kai and
               Zellhöfer, David},
  title     = {{OCR fulltexts of the Digital Collections of the
                Berlin State Library (DC-SBB)}},
  month     = jun,
  year      = 2019,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.3257041},
  url       = {https://doi.org/10.5281/zenodo.3257041}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.