
Classifier_30k

This model is a fine-tuned version of microsoft/deberta-v3-large on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1296
  • Accuracy: 0.9876

Model description

More information needed

Intended uses & limitations

More information needed
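
While the intended uses are not documented, the checkpoint can be loaded like any Hugging Face sequence-classification model. The sketch below is a minimal example, assuming the repository id Tensorride/Classifier_30k and a standard classification head; the id-to-label mapping is not documented on this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository id as shown on the model page; adjust if the model is hosted elsewhere.
repo_id = "Tensorride/Classifier_30k"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Classify a single piece of text (placeholder input).
inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # integer class id; the label names are not documented here
```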

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for an approximate reconstruction):

  • learning_rate: 2e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
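
As a rough guide, the configuration above maps onto transformers.TrainingArguments as sketched below. The output directory, number of labels, and evaluation strategy are assumptions for illustration, not values taken from this card.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # label count assumed

args = TrainingArguments(
    output_dir="classifier_30k",        # placeholder path
    learning_rate=2e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # effective train batch size: 8 * 4 = 32
    num_train_epochs=50,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                          # corresponds to Native AMP mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",        # assumed; consistent with the per-epoch validation logs below
)

# The training and evaluation datasets are not documented on this card,
# so they are left as placeholders here.
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```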

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.3588        | 0.9994  | 831   | 0.3084          | 0.9091   |
| 0.1252        | 2.0     | 1663  | 0.2260          | 0.9453   |
| 0.1123        | 2.9994  | 2494  | 0.1241          | 0.9604   |
| 0.0896        | 4.0     | 3326  | 0.1372          | 0.9655   |
| 0.0749        | 4.9994  | 4157  | 0.1541          | 0.9708   |
| 0.0743        | 6.0     | 4989  | 0.1127          | 0.9715   |
| 0.0596        | 6.9994  | 5820  | 0.1782          | 0.9672   |
| 0.0494        | 8.0     | 6652  | 0.1352          | 0.9749   |
| 0.0443        | 8.9994  | 7483  | 0.1232          | 0.9681   |
| 0.0405        | 10.0    | 8315  | 0.0756          | 0.9838   |
| 0.0383        | 10.9994 | 9146  | 0.2025          | 0.9600   |
| 0.0361        | 12.0    | 9978  | 0.1130          | 0.9796   |
| 0.0288        | 12.9994 | 10809 | 0.0906          | 0.9855   |
| 0.0249        | 14.0    | 11641 | 0.1122          | 0.9827   |
| 0.0222        | 14.9994 | 12472 | 0.0713          | 0.9862   |
| 0.0239        | 16.0    | 13304 | 0.0552          | 0.9876   |
| 0.0234        | 16.9994 | 14135 | 0.0728          | 0.9864   |
| 0.0258        | 18.0    | 14967 | 0.0558          | 0.9891   |
| 0.0208        | 18.9994 | 15798 | 0.0715          | 0.9879   |
| 0.0199        | 20.0    | 16630 | 0.0753          | 0.9885   |
| 0.0143        | 20.9994 | 17461 | 0.0812          | 0.9872   |
| 0.0255        | 22.0    | 18293 | 0.1661          | 0.9744   |
| 0.0156        | 22.9994 | 19124 | 0.0751          | 0.9883   |
| 0.013         | 24.0    | 19956 | 0.0718          | 0.9862   |
| 0.0126        | 24.9994 | 20787 | 0.0829          | 0.9853   |
| 0.0123        | 26.0    | 21619 | 0.0848          | 0.9857   |
| 0.0109        | 26.9994 | 22450 | 0.0913          | 0.9864   |
| 0.0095        | 28.0    | 23282 | 0.1607          | 0.9774   |
| 0.0096        | 28.9994 | 24113 | 0.0958          | 0.9853   |
| 0.0074        | 30.0    | 24945 | 0.1264          | 0.9857   |
| 0.0091        | 30.9994 | 25776 | 0.1030          | 0.9881   |
| 0.0096        | 32.0    | 26608 | 0.0954          | 0.9879   |
| 0.0074        | 32.9994 | 27439 | 0.1103          | 0.9885   |
| 0.0067        | 34.0    | 28271 | 0.1803          | 0.9791   |
| 0.0044        | 34.9994 | 29102 | 0.1597          | 0.9817   |
| 0.0045        | 36.0    | 29934 | 0.0878          | 0.9894   |
| 0.0034        | 36.9994 | 30765 | 0.1680          | 0.9806   |
| 0.0066        | 38.0    | 31597 | 0.1114          | 0.9870   |
| 0.0041        | 38.9994 | 32428 | 0.0910          | 0.9896   |
| 0.0043        | 40.0    | 33260 | 0.1435          | 0.9840   |
| 0.0037        | 40.9994 | 34091 | 0.1233          | 0.9881   |
| 0.0046        | 42.0    | 34923 | 0.1347          | 0.9864   |
| 0.0029        | 42.9994 | 35754 | 0.1134          | 0.9883   |
| 0.0017        | 44.0    | 36586 | 0.1125          | 0.9879   |
| 0.0025        | 44.9994 | 37417 | 0.1400          | 0.9859   |
| 0.0023        | 46.0    | 38249 | 0.1228          | 0.9879   |
| 0.0017        | 46.9994 | 39080 | 0.1445          | 0.9862   |
| 0.0011        | 48.0    | 39912 | 0.1375          | 0.9876   |
| 0.0013        | 48.9994 | 40743 | 0.1323          | 0.9876   |
| 0.0021        | 49.9699 | 41550 | 0.1296          | 0.9876   |

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.2.2+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1
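
For reproducibility, the pinned versions listed above can be verified in the local environment with a quick check (a sketch; it assumes all four packages are installed):

```python
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.40.0
print(torch.__version__)         # expected: 2.2.2+cu121
print(datasets.__version__)      # expected: 2.19.0
print(tokenizers.__version__)    # expected: 0.19.1
```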