---
task_categories:
  - token-classification
language:
  - pt
size_categories:
  - 100K<n<1M
---

# Dataset

This dataset was created by consolidating information from various Portuguese archives. We harvested data from these archives and then manually annotated each resulting corpus with named entities of the types Person, Place, Date, Profession, and Organization. The individual corpora were merged into a unified corpus named "ner-archive-pt", which can be accessed at: http://ner.epl.di.uminho.pt/
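To illustrate the token-classification format, the sketch below collapses a BIO-tagged token sequence into entity spans. The field layout, BIO tag scheme, and label names (`Person`, `Place`, `Date`, etc.) are assumptions for illustration and may differ from the dataset's actual schema:

```python
def bio_to_spans(tokens, tags):
    """Collapse a BIO-tagged token sequence into (label, text) entity spans."""
    spans, cur_label, cur_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:  # flush the previous entity before starting a new one
                spans.append((cur_label, " ".join(cur_tokens)))
            cur_label, cur_tokens = tag[2:], [token]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_tokens.append(token)  # continuation of the current entity
        else:  # "O" tag or an inconsistent I- tag: close any open entity
            if cur_label:
                spans.append((cur_label, " ".join(cur_tokens)))
            cur_label, cur_tokens = None, []
    if cur_label:  # flush a trailing entity at the end of the sequence
        spans.append((cur_label, " ".join(cur_tokens)))
    return spans

# Hypothetical record using the entity types listed above
tokens = ["José", "Silva", "nasceu", "em", "Braga", "em", "1850"]
tags = ["B-Person", "I-Person", "O", "O", "B-Place", "O", "B-Date"]
print(bio_to_spans(tokens, tags))
# [('Person', 'José Silva'), ('Place', 'Braga'), ('Date', '1850')]
```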

## Citation


```bibtex
@Article{make4010003,
  AUTHOR = {Cunha, Luís Filipe and Ramalho, José Carlos},
  TITLE = {NER in Archival Finding Aids: Extended},
  JOURNAL = {Machine Learning and Knowledge Extraction},
  VOLUME = {4},
  YEAR = {2022},
  NUMBER = {1},
  PAGES = {42--65},
  URL = {https://www.mdpi.com/2504-4990/4/1/3},
  ISSN = {2504-4990},
  ABSTRACT = {The amount of information preserved in Portuguese archives has increased over the years. These documents represent a national heritage of high importance, as they portray the country's history. Currently, most Portuguese archives have made their finding aids available to the public in digital format, however, these data do not have any annotation, so it is not always easy to analyze their content. In this work, Named Entity Recognition solutions were created that allow the identification and classification of several named entities from the archival finding aids. These named entities translate into crucial information about their context and, with high confidence results, they can be used for several purposes, for example, the creation of smart browsing tools by using entity linking and record linking techniques. In order to achieve high result scores, we annotated several corpora to train our own Machine Learning algorithms in this context domain. We also used different architectures, such as CNNs, LSTMs, and Maximum Entropy models. Finally, all the created datasets and ML models were made available to the public with a developed web platform, NER@DI.},
  DOI = {10.3390/make4010003}
}
```