Polemo Intensity Model

This model is fine-tuned for emotion intensity detection in Polish political texts. It is based on the Polish RoBERTa transformer model (https://github.com/sdadas/polish-roberta) and has been specifically trained to recognize a range of emotions in text data scraped from various social media platforms. The model returns continuous scores between 0 and 1 for eight emotional metrics: Happiness, Sadness, Anger, Disgust, Fear, Pride, Valence, and Arousal.

Model Details

Database Preparation

Our research utilizes a comprehensive database of Polish political texts from the social media profiles (Twitter, YouTube, and Facebook) of 25 journalists, 25 politicians, and 19 non-governmental organizations (NGOs). For each profile, all available posts on each platform were scraped, going back to the beginning of 2019. In addition, we included texts written by non-professional commentators on social affairs. The full dataset consists of 1,246,337 text snippets:

  • Twitter: 789,490 tweets
  • YouTube: 42,252 comments
  • Facebook: 414,595 posts

The texts were processed to fit the length constraints of transformer models. Facebook posts were split into sentences, and all texts longer than 280 characters were removed. Non-Polish texts were filtered out using the langdetect library, and all online links and usernames were replaced with placeholders. For training we focused on texts with higher emotional content, which we selected using a lexicon-based filter, resulting in a final dataset of 10,000 texts annotated by 20 expert annotators.
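
A minimal sketch of this preprocessing in Python, assuming simple regular expressions and placeholder tokens ([LINK] and [USER] are illustrative choices; the card does not specify the exact placeholders used):

import re
from langdetect import detect, LangDetectException

URL_RE = re.compile(r"https?://\S+")
USER_RE = re.compile(r"@\w+")

def preprocess(texts):
    """Length-filter, language-filter, and anonymize raw snippets."""
    cleaned = []
    for text in texts:
        # Drop texts that exceed the 280-character limit
        if len(text) > 280:
            continue
        # Keep only Polish-language texts
        try:
            if detect(text) != "pl":
                continue
        except LangDetectException:  # raised on empty or undetectable input
            continue
        # Replace links and usernames with placeholder tokens
        text = URL_RE.sub("[LINK]", text)
        text = USER_RE.sub("[USER]", text)
        cleaned.append(text)
    return cleaned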

Annotation Process

The final dataset was annotated for the following emotions:

  • Happiness
  • Sadness
  • Anger
  • Disgust
  • Fear
  • Pride
  • Valence
  • Arousal

Annotators used a 5-point scale for each emotion and dimension. To keep ratings consistent and reduce the impact of individual subjectivity, each text was annotated by five different annotators.
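
The card does not state how the five 5-point ratings per text were aggregated into the continuous 0-to-1 targets the model predicts. One natural mapping, shown purely as an assumed illustration, averages the ratings and rescales them to the unit interval:

def to_target(ratings):
    """Average the annotators' 1-5 ratings and rescale to [0, 1].

    Note: this aggregation is an assumption for illustration;
    the original pipeline may have combined ratings differently.
    """
    mean = sum(ratings) / len(ratings)
    return (mean - 1) / 4  # map the 1..5 range onto 0..1

# Example: five annotators rated a text's Anger as 4, 5, 3, 4, 4
print(to_target([4, 5, 3, 4, 4]))  # 0.75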

Model Training

We considered two base models: the TrelBERT transformer model and the Polish RoBERTa model. The final model is based on Polish RoBERTa and was fine-tuned using a Bayesian hyperparameter search. The best-performing configuration (illustrated in the sketch after this list) was:

  • Dropout: 0.6
  • Learning rate: 5e-5
  • Weight decay: 0.3
  • Warmup steps: 600
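
As a rough illustration only, this is how the hyperparameters above would map onto a standard Hugging Face fine-tuning setup. The actual training used a custom model class, and the base checkpoint name below is an assumption (one of the sdadas Polish RoBERTa releases), so treat this as a sketch rather than the original training script:

from transformers import AutoConfig, TrainingArguments

# Dropout is a model-config setting (checkpoint name assumed, not confirmed)
config = AutoConfig.from_pretrained("sdadas/polish-roberta-base-v2")
config.hidden_dropout_prob = 0.6
config.attention_probs_dropout_prob = 0.6

# Optimizer and schedule hyperparameters from the list above
training_args = TrainingArguments(
    output_dir="polemo_intensity_ft",  # hypothetical output path
    learning_rate=5e-5,
    weight_decay=0.3,
    warmup_steps=600,
)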

Results

The model's predictions correlated strongly with human ratings, reaching correlations of 0.87 for happiness and valence. Correlations for the other emotions were also substantial, indicating that the model captures a wide range of emotional states.

K-Fold Validation

A 10-fold cross-validation showed high reliability across different emotional dimensions:

  • Happiness: 0.83
  • Sadness: 0.68
  • Anger: 0.81
  • Disgust: 0.75
  • Fear: 0.67
  • Pride: 0.76
  • Valence: 0.84
  • Arousal: 0.71
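
A sketch of how such per-dimension reliability can be computed; the correlation variant and the evaluation loop are assumptions, and train_and_predict stands in for a hypothetical helper that fine-tunes on one fold's training split and predicts on its test split:

import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold

EMOTIONS = ["Happiness", "Sadness", "Anger", "Disgust",
            "Fear", "Pride", "Valence", "Arousal"]

def cross_validate(texts, ratings, train_and_predict, n_splits=10):
    """ratings: array of shape (n_texts, 8) with human scores.
    train_and_predict: hypothetical helper returning predictions
    of shape (n_test, 8) for the given test indices."""
    per_fold = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True).split(texts):
        preds = train_and_predict(train_idx, test_idx)
        # Pearson correlation between predictions and human ratings,
        # computed separately for each emotional dimension
        per_fold.append([pearsonr(preds[:, i], ratings[test_idx, i])[0]
                         for i in range(len(EMOTIONS))])
    return dict(zip(EMOTIONS, np.mean(per_fold, axis=0)))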

Usage

You can use the model and tokenizer as follows:

First, clone the repository with the shell command below (this may take some time). Because the model uses a custom model class, it cannot be loaded through the standard Hugging Face AutoModel setup.

git clone https://huggingface.co./hplisiecki/polemo_intensity
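
Note that the weight files in the repository are stored with Git LFS; if the clone completes without them, install Git LFS (git lfs install) and clone again.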

Proceed as follows:

from polemo_intensity.model_script import Model  # custom model class shipped with the repository
from transformers import AutoTokenizer
import torch

model_directory = "polemo_intensity"  # path to the cloned repository
model = Model.from_pretrained(model_directory)
tokenizer = AutoTokenizer.from_pretrained(model_directory)

# The model expects Polish input text
inputs = tokenizer("To jest przykładowy tekst.", return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    outputs = model(inputs['input_ids'], inputs['attention_mask'])

# Print out the emotion ratings
for emotion, rating in zip(['Happiness', 'Sadness', 'Anger', 'Disgust', 'Fear', 'Pride', 'Valence', 'Arousal'], outputs):
    print(f"{emotion}: {rating.item()}")

Team

The model was created at the Digital Social Sciences Research Laboratory at IFIS PAN (https://lab.ifispan.edu.pl).
