XLM-RoBERTa-PEA-relevance-de

Model description

XLM-RoBERTa-PEA-relevance-de is a finetuned model based on XLM-RoBERTa for the binary task of discriminating between relevant and not relevant newspaper articles containing protest-related keywords. The model was finetuned on 3,972 manually annotated German newspaper articles (2,224 positive and 1,748 negative cases).

Intended uses & limitations

The model is intended to separate relevant from not relevant articles in the first step of a protest event analysis (PEA) pipeline. Despite being finetuned on German data only, it also performs well in other languages (tested for English and Hungarian).

Usage

You can use this model with a pipeline for binary text classification.
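
A minimal sketch using the Hugging Face transformers text-classification pipeline; the model id is taken from this repository, while the example sentence and the mapping of label ids to relevant/not relevant are assumptions (check the model's config for the actual label names):

from transformers import pipeline

# Load the relevance classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="shaunss/xlmroberta-pea-relevance-de",
)

# Hypothetical example of a German article snippet containing protest-related content
article = (
    "Mehrere hundert Menschen demonstrierten am Samstag in der "
    "Innenstadt gegen die geplante Schliessung des Krankenhauses."
)

# Returns a label and score, e.g. [{'label': 'LABEL_1', 'score': 0.97}];
# which label id corresponds to "relevant" depends on the model's config
print(classifier(article, truncation=True))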

BibTeX entry and citation info

@inproceedings{Wiedemann_Dollbaum_Haunss_Daphi_Meier_2022,
  author    = {Wiedemann, Gregor and
               Dollbaum, Jan Matti and
               Haunss, Sebastian and
               Daphi, Priska and
               Meier, Larissa Daria},
  title     = {A Generalizing Approach to Protest Event Detection in German Local News},
  url       = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.413.pdf},
  booktitle = {Proceedings of the 13th Conference on Language Resources and Evaluation},
  year      = {2022},
  address   = {Marseille},
  pages     = {3883--3891}
}

Model size: 560M parameters (F32, stored as safetensors)
