# bcms-bertic-parlasent-bcs-ter

Ternary text classification model based on classla/bcms-bertic and fine-tuned on the BCS Political Sentiment dataset (sentence-level data).

This classifier assigns text to one of three categories: Negative, Neutral, and Positive. For the binary classifier (Negative, Other), see the companion binary model.

For details on the dataset and the fine-tuning procedure, please see the ParlaSent-BCS paper cited below.

## Fine-tuning hyperparameters

Fine-tuning was performed with simpletransformers. Before fine-tuning, a brief sweep over the number of training epochs was performed; the best-performing value was 9. All other arguments were kept at their defaults:


```python
model_args = {
    "num_train_epochs": 9,
}
```
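
A minimal sketch of how such an epoch sweep could look with simpletransformers is given below. It assumes `train_df` and `dev_df` are pandas DataFrames in the two-column (text, labels) format the library expects; the epoch grid shown is hypothetical, not the one used for this model.

```python
from simpletransformers.classification import ClassificationModel
from sklearn.metrics import f1_score

def macro_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average="macro")

best_epochs, best_score = None, 0.0
for epochs in (3, 6, 9, 12):  # hypothetical grid
    model = ClassificationModel(
        "electra",
        "classla/bcms-bertic",
        num_labels=3,
        args={"num_train_epochs": epochs, "overwrite_output_dir": True},
    )
    model.train_model(train_df)
    # Extra keyword arguments to eval_model are computed as additional metrics
    result, _, _ = model.eval_model(dev_df, macro_f1=macro_f1)
    if result["macro_f1"] > best_score:
        best_epochs, best_score = epochs, result["macro_f1"]

print(best_epochs, best_score)
```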

## Performance

The same pipeline was run with two other transformer models and with fastText for comparison. Macro F1 scores were recorded for each of the six fine-tuning runs and analyzed afterwards.

| model | average macro F1 |
| --- | --- |
| bcms-bertic-parlasent-bcs-ter | 0.7941 ± 0.0101 ** |
| EMBEDDIA/crosloengual-bert | 0.7709 ± 0.0113 |
| xlm-roberta-base | 0.7184 ± 0.0139 |
| fasttext + CLARIN.si embeddings | 0.6312 ± 0.0043 |

The two best-performing models were compared with the Mann-Whitney U test to calculate p-values (** denotes p < 0.01).
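
The card does not list the per-run scores, but the comparison itself is straightforward with scipy; the values below are placeholders, not the actual results:

```python
from scipy.stats import mannwhitneyu

# Placeholder macro F1 scores for the six runs of each model (not the real values)
bertic_scores = [0.79, 0.80, 0.81, 0.79, 0.80, 0.78]
cse_bert_scores = [0.77, 0.76, 0.78, 0.77, 0.78, 0.76]

# One-sided test: are the BERTić model's scores stochastically greater?
statistic, p_value = mannwhitneyu(bertic_scores, cse_bert_scores, alternative="greater")
print(f"U = {statistic}, p = {p_value:.4f}")
```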

## Usage example with `simpletransformers==0.63.7`

```python
from simpletransformers.classification import ClassificationModel

# Load the fine-tuned ternary sentiment classifier (BERTić is an ELECTRA model)
model = ClassificationModel("electra", "classla/bcms-bertic-parlasent-bcs-ter")

predictions, raw_outputs = model.predict([
    "Vi niste normalni",                        # "You are not normal"
    "Đački autobusi moraju da voze svaki dan",  # "School buses must run every day"
    "Ovo je najbolji zakon na svetu",           # "This is the best law in the world"
])

predictions
# Output: array([0, 1, 2])

# Map numeric labels to their names
[model.config.id2label[i] for i in predictions]
# Output: ['Negative', 'Neutral', 'Positive']
```
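
Since BERTić is an ELECTRA model with a standard sequence-classification head, the checkpoint should also load with plain transformers. A minimal sketch, assuming the hosted weights and `id2label` mapping are picked up automatically:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="classla/bcms-bertic-parlasent-bcs-ter",
)

print(classifier("Ovo je najbolji zakon na svetu"))
# Expected to be along the lines of: [{'label': 'Positive', 'score': ...}]
```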

## Citation

If you use the model, please cite the following paper on which the original model is based:

```bibtex
@inproceedings{ljubesic-lauc-2021-bertic,
    title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
    author = "Ljube{\v{s}}i{\'c}, Nikola  and Lauc, Davor",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
    pages = "37--42",
}
```

and the paper describing the dataset and the methods used for the current fine-tuning:

```bibtex
@misc{mochtak-etal-2022-parlasent,
    doi = {10.48550/ARXIV.2206.00929},
    url = {https://arxiv.org/abs/2206.00929},
    author = {Mochtak, Michal and Rupnik, Peter and Ljubešič, Nikola},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates from Bosnia-Herzegovina, Croatia, and Serbia},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
```