---
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: scenario_4
    results: []
---

scenario_4

This model is a fine-tuned version of microsoft/mdeberta-v3-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1764
  • Accuracy: 0.9704
  • F1: 0.9704
  • Precision: 0.9710
  • Recall: 0.9704
  • Accuracy Label Test: 0.9879
  • Accuracy Label Train: 0.9536
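
The overall accuracy, F1, precision, and recall above are consistent with metrics computed over the full evaluation set with weighted averaging (note that accuracy and recall coincide). As a rough illustration, the snippet below is a hedged sketch of a `compute_metrics` function that would report metrics in this form; the weighted averaging scheme is an assumption, and the per-label "Accuracy Label ..." figures are not reproduced here.

```python
# Hedged sketch of a compute_metrics function producing accuracy / F1 /
# precision / recall in the form reported above. The weighted averaging is an
# assumption; the per-label "Accuracy Label ..." figures are omitted.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```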

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 3
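
For convenience, here is a minimal sketch of how these hyperparameters could be expressed with the Transformers `Trainer` API. The output directory, the evaluation schedule, and the commented-out `Trainer` wiring are assumptions rather than the original training script; the Adam betas and epsilon listed above are the library defaults.

```python
# Minimal sketch (not the original training script) mapping the listed
# hyperparameters onto Hugging Face TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario_4",         # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,   # gives a total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    eval_strategy="steps",           # the results table reports metrics every 100 steps
    eval_steps=100,
)

# Trainer wiring (datasets, model, and compute_metrics are not documented in this card):
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset,
#                   compute_metrics=compute_metrics)
# trainer.train()
```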

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall | Accuracy Label Test | Accuracy Label Train |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------------:|:--------------------:|
| 0.5515        | 0.1579 | 100  | 0.5194          | 0.7579   | 0.7415 | 0.8352    | 0.7579 | 0.5070              | 0.9992               |
| 0.2205        | 0.3157 | 200  | 0.2537          | 0.9300   | 0.9298 | 0.9361    | 0.9300 | 0.9883              | 0.8739               |
| 0.1106        | 0.4736 | 300  | 0.3450          | 0.9129   | 0.9124 | 0.9248    | 0.9129 | 0.9960              | 0.8329               |
| 0.0384        | 0.6314 | 400  | 0.1408          | 0.9683   | 0.9683 | 0.9687    | 0.9683 | 0.9835              | 0.9536               |
| 0.0631        | 0.7893 | 500  | 0.1517          | 0.9631   | 0.9631 | 0.9645    | 0.9631 | 0.9895              | 0.9377               |
| 0.0276        | 0.9471 | 600  | 0.3649          | 0.9387   | 0.9386 | 0.9444    | 0.9387 | 0.9948              | 0.8847               |
| 0.0245        | 1.1050 | 700  | 0.1339          | 0.9702   | 0.9702 | 0.9702    | 0.9702 | 0.9727              | 0.9679               |
| 0.0519        | 1.2628 | 800  | 0.4945          | 0.9186   | 0.9182 | 0.9299    | 0.9186 | 0.9992              | 0.8410               |
| 0.02          | 1.4207 | 900  | 0.2637          | 0.9549   | 0.9548 | 0.9580    | 0.9549 | 0.9960              | 0.9153               |
| 0.0325        | 1.5785 | 1000 | 0.1165          | 0.9708   | 0.9708 | 0.9712    | 0.9708 | 0.9851              | 0.9571               |
| 0.016         | 1.7364 | 1100 | 0.1007          | 0.9692   | 0.9692 | 0.9697    | 0.9692 | 0.9530              | 0.9849               |
| 0.0068        | 1.8942 | 1200 | 0.1679          | 0.9690   | 0.9690 | 0.9697    | 0.9690 | 0.9871              | 0.9516               |
| 0.0042        | 2.0521 | 1300 | 0.1182          | 0.9734   | 0.9734 | 0.9734    | 0.9734 | 0.9723              | 0.9745               |
| 0.0005        | 2.2099 | 1400 | 0.1432          | 0.9730   | 0.9730 | 0.9731    | 0.9730 | 0.9799              | 0.9663               |
| 0.0182        | 2.3678 | 1500 | 0.1460          | 0.9718   | 0.9718 | 0.9723    | 0.9718 | 0.9871              | 0.9571               |
| 0.0004        | 2.5257 | 1600 | 0.1383          | 0.9732   | 0.9732 | 0.9734    | 0.9732 | 0.9843              | 0.9625               |
| 0.0003        | 2.6835 | 1700 | 0.1381          | 0.9744   | 0.9744 | 0.9745    | 0.9744 | 0.9831              | 0.9660               |
| 0.0002        | 2.8414 | 1800 | 0.1599          | 0.9724   | 0.9724 | 0.9728    | 0.9724 | 0.9863              | 0.9590               |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.3.1
  • Datasets 2.20.0
  • Tokenizers 0.19.1