---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: results_classification
    results: []
---

# results_classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

- Loss: 0.2517
- Accuracy: 0.9214
- F1: 0.9214
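
The checkpoint can be loaded with the standard `text-classification` pipeline. This is a minimal sketch, assuming the model is published under the repo id `nlpllm007/results_classification` (the card does not state the hub path) and that the label names are the default `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

# Repo id is assumed from the card name; swap in the real hub path or a local checkpoint directory.
classifier = pipeline("text-classification", model="nlpllm007/results_classification")

print(classifier("A short example sentence to classify."))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.97}]  (label names depend on the training data)
```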

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
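
These settings correspond to `transformers.TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the actual training script: the dataset is unknown, so a placeholder dataset and an assumed `num_labels=2` are used, and the Adam betas/epsilon and linear scheduler match the `TrainingArguments` defaults.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)  # num_labels assumed

# Placeholder data: the card does not describe the real training/evaluation sets.
raw = Dataset.from_dict({"text": ["good", "bad"] * 64, "label": [1, 0] * 64})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

# Mirrors the hyperparameters listed above; eval/logging every 50 steps matches the results table.
args = TrainingArguments(
    output_dir="results_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=500,
    evaluation_strategy="steps",
    eval_steps=50,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder split; the real eval set is unknown
    tokenizer=tokenizer,     # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```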

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.152         | 0.0133 | 50   | 0.3216          | 0.9037   | 0.9033 |
| 0.1533        | 0.0267 | 100  | 0.3024          | 0.9096   | 0.9095 |
| 0.1443        | 0.04   | 150  | 0.3356          | 0.9017   | 0.9010 |
| 0.1101        | 0.0533 | 200  | 0.3121          | 0.9134   | 0.9133 |
| 0.1147        | 0.0667 | 250  | 0.3813          | 0.9005   | 0.9002 |
| 0.1611        | 0.08   | 300  | 0.2992          | 0.9134   | 0.9129 |
| 0.1553        | 0.0933 | 350  | 0.2858          | 0.9166   | 0.9166 |
| 0.1268        | 0.1067 | 400  | 0.2769          | 0.9186   | 0.9185 |
| 0.2011        | 0.12   | 450  | 0.2525          | 0.9214   | 0.9215 |
| 0.1845        | 0.1333 | 500  | 0.2517          | 0.9214   | 0.9214 |
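
The Accuracy and F1 columns are consistent with a `compute_metrics` function along the following lines. This is a sketch only, assuming the `evaluate` library; the card does not include the actual implementation, and the F1 averaging mode is an assumption:

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"],
        # "weighted" averaging is an assumption; the card does not state which mode was used.
        "f1": f1_metric.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```

Such a function would be passed to the `Trainer` via `compute_metrics=compute_metrics`.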

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1