metadata
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: sentiment-polish-gpt2-small
    results:
      - task:
          type: text-classification
        dataset:
          type: allegro/klej-polemo2-out
          name: klej-polemo2-out
        metrics:
          - type: accuracy
            value: 98.38%
license: mit
language:
  - pl
datasets:
  - clarin-pl/polemo2-official

sentiment-polish-gpt2-small

This model was fine-tuned from polish-gpt2-small on the polemo2-official dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4659
  • Accuracy: 0.9627

Model description

Fine-tuned from polish-gpt2-small for sentiment classification of Polish text.

Intended uses & limitations

Sentiment analysis with four labels: neutral, negative, positive, ambiguous.
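
A minimal usage sketch, assuming the transformers text-classification pipeline and that the model's label names match those used in the evaluation below (NEUTRAL, NEGATIVE, POSITIVE, AMBIGUOUS); the example sentence is only an illustration:

from transformers import pipeline

# Load the fine-tuned classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="nie3e/sentiment-polish-gpt2-small"
)

# Example input: "The service was very nice, I recommend it!"
print(classifier("Obsługa była bardzo miła, polecam!"))
# expected output shape: [{'label': 'POSITIVE', 'score': ...}]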

Training and evaluation data

All rows from the polemo2-official dataset were merged into a single set.

Train/test split: 80%/20%
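
A hedged sketch of how such a merge and split could be reproduced with datasets; the exact procedure and seed used for the original split are not documented here, so treat this as an assumption:

from datasets import load_dataset, concatenate_datasets

# Load all available splits of polemo2-official and merge them into one dataset
raw = load_dataset("clarin-pl/polemo2-official")
merged = concatenate_datasets([raw[split] for split in raw.keys()])

# 80%/20% train/test split (seed chosen here only for illustration)
split = merged.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = split["train"], split["test"]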

Datacollator:

from transformers import AutoTokenizer, DataCollatorWithPadding

# Tokenizer of the base model (repo name assumed; the card only says "polish-gpt2-small")
tokenizer = AutoTokenizer.from_pretrained("sdadas/polish-gpt2-small")
# GPT-2 tokenizers have no pad token by default; reuse the EOS token for padding
tokenizer.pad_token = tokenizer.eos_token

# Pad each batch to its longest sequence, keeping lengths a multiple of 8
data_collator = DataCollatorWithPadding(
  tokenizer=tokenizer,
  padding="longest",
  max_length=128,
  pad_to_multiple_of=8
)
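
With padding="longest", each batch is padded only to its longest sequence, and pad_to_multiple_of=8 keeps padded lengths at multiples of 8, which helps Tensor Core throughput with mixed-precision training on the RTX 3090 used here.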

Training procedure

GPU: RTX 3090

Training time: 2:53:05

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
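
A minimal sketch, assuming these settings map onto transformers TrainingArguments as shown; the output directory, evaluation strategy, and the use of fp16 for Native AMP are assumptions rather than a copy of the original training script:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment-polish-gpt2-small",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,   # effective train batch size: 4 * 8 = 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    evaluation_strategy="epoch",     # matches the per-epoch results table below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults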

Training results

Training Loss | Epoch | Step  | Validation Loss | Accuracy
0.4049        | 1.0   | 3284  | 0.3351          | 0.8792
0.1885        | 2.0   | 6568  | 0.2625          | 0.9218
0.1182        | 3.0   | 9852  | 0.2583          | 0.9419
0.0825        | 4.0   | 13136 | 0.2886          | 0.9482
0.0586        | 5.0   | 16420 | 0.3343          | 0.9538
0.034         | 6.0   | 19704 | 0.3734          | 0.9595
0.0288        | 7.0   | 22988 | 0.4125          | 0.9599
0.0185        | 8.0   | 26273 | 0.4262          | 0.9626
0.0069        | 9.0   | 29557 | 0.4529          | 0.9622
0.0059        | 10.0  | 32840 | 0.4659          | 0.9627

Evaluation

Evaluated on the test split of allegro/klej-polemo2-out; the dataset's string labels are first mapped to the integer ids the model was trained with.

from datasets import load_dataset
from evaluate import evaluator

data = load_dataset("allegro/klej-polemo2-out", split="test").shuffle(seed=42)
task_evaluator = evaluator("text-classification")

# Map the dataset's string labels to the integer ids used by the model
label2id = {
    "__label__meta_zero": 0,
    "__label__meta_minus_m": 1,
    "__label__meta_plus_m": 2,
    "__label__meta_amb": 3
}

def fix_labels(examples):
    examples["target"] = label2id[examples["target"]]
    return examples

data = data.map(fix_labels)

eval_results = task_evaluator.compute(
    model_or_pipeline="nie3e/sentiment-polish-gpt2-small",
    data=data,
    label_mapping={"NEUTRAL": 0, "NEGATIVE": 1, "POSITIVE": 2, "AMBIGUOUS": 3},
    input_column="sentence",
    label_column="target"
)

print(eval_results)
{
    "accuracy": 0.9838056680161943,
    "total_time_in_seconds": 5.2441766999982065,
    "samples_per_second": 94.1997244296076,
    "latency_in_seconds": 0.010615742307688678
}

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0