
# arabert_cross_development_task1_fold1

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.7152
- Qwk: 0.1109
- Mse: 0.7153
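
Since the card does not yet document usage, here is a minimal loading sketch. It assumes the checkpoint carries a single-logit sequence-classification (regression-style) head, which would be consistent with the Qwk/Mse metrics above; the actual head configuration is not stated in the card.

```python
# Minimal sketch: load the checkpoint and score a piece of Arabic text.
# Assumption: a regression-style head with one logit per example; the card
# does not describe the task, so interpret the output accordingly.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_development_task1_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("هذا نص تجريبي.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # score scale and rounding depend on the (undocumented) training setup
```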

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after the list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
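
For reference, the listed values map onto 🤗 Transformers `TrainingArguments` roughly as follows. This is a sketch, not the author's actual training script; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# The Adam betas/epsilon are the Trainer defaults and match the card.
training_args = TrainingArguments(
    output_dir="arabert_cross_development_task1_fold1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```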

### Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| No log | 0.125 | 2 | 5.0133 | -0.0040 | 5.0069 |
| No log | 0.25 | 4 | 1.7271 | 0.0220 | 1.7233 |
| No log | 0.375 | 6 | 0.6069 | 0.0575 | 0.6049 |
| No log | 0.5 | 8 | 0.6108 | 0.0838 | 0.6102 |
| No log | 0.625 | 10 | 0.6811 | -0.0132 | 0.6805 |
| No log | 0.75 | 12 | 0.6094 | 0.1119 | 0.6084 |
| No log | 0.875 | 14 | 0.6389 | 0.0138 | 0.6386 |
| No log | 1.0 | 16 | 0.6607 | -0.0302 | 0.6606 |
| No log | 1.125 | 18 | 0.8561 | 0.0 | 0.8561 |
| No log | 1.25 | 20 | 0.8772 | 0.0192 | 0.8774 |
| No log | 1.375 | 22 | 0.6488 | 0.1233 | 0.6484 |
| No log | 1.5 | 24 | 0.6095 | 0.0553 | 0.6090 |
| No log | 1.625 | 26 | 0.8548 | 0.0249 | 0.8549 |
| No log | 1.75 | 28 | 0.8236 | 0.0 | 0.8237 |
| No log | 1.875 | 30 | 0.6385 | 0.0263 | 0.6382 |
| No log | 2.0 | 32 | 0.6856 | -0.0056 | 0.6854 |
| No log | 2.125 | 34 | 0.7166 | 0.0 | 0.7165 |
| No log | 2.25 | 36 | 0.6505 | 0.0135 | 0.6503 |
| No log | 2.375 | 38 | 0.6867 | 0.0 | 0.6865 |
| No log | 2.5 | 40 | 0.7602 | 0.0 | 0.7602 |
| No log | 2.625 | 42 | 0.7896 | 0.0 | 0.7897 |
| No log | 2.75 | 44 | 0.8485 | 0.0 | 0.8487 |
| No log | 2.875 | 46 | 0.8089 | 0.0 | 0.8090 |
| No log | 3.0 | 48 | 0.8838 | 0.0 | 0.8841 |
| No log | 3.125 | 50 | 1.0443 | 0.0 | 1.0448 |
| No log | 3.25 | 52 | 1.2312 | 0.0 | 1.2319 |
| No log | 3.375 | 54 | 1.2836 | 0.0 | 1.2843 |
| No log | 3.5 | 56 | 1.0837 | 0.0 | 1.0842 |
| No log | 3.625 | 58 | 0.9529 | 0.0 | 0.9532 |
| No log | 3.75 | 60 | 0.9030 | 0.0 | 0.9032 |
| No log | 3.875 | 62 | 0.9117 | 0.0 | 0.9119 |
| No log | 4.0 | 64 | 0.8372 | 0.0 | 0.8373 |
| No log | 4.125 | 66 | 0.8288 | 0.0 | 0.8289 |
| No log | 4.25 | 68 | 0.7667 | 0.0 | 0.7667 |
| No log | 4.375 | 70 | 0.8226 | 0.0 | 0.8228 |
| No log | 4.5 | 72 | 0.9046 | 0.0 | 0.9050 |
| No log | 4.625 | 74 | 1.1055 | 0.0 | 1.1061 |
| No log | 4.75 | 76 | 1.1209 | 0.0 | 1.1216 |
| No log | 4.875 | 78 | 0.9195 | 0.0 | 0.9199 |
| No log | 5.0 | 80 | 0.7954 | 0.0554 | 0.7956 |
| No log | 5.125 | 82 | 0.8199 | 0.0 | 0.8203 |
| No log | 5.25 | 84 | 0.6997 | 0.1731 | 0.6999 |
| No log | 5.375 | 86 | 0.6765 | 0.1731 | 0.6766 |
| No log | 5.5 | 88 | 0.8095 | 0.0554 | 0.8099 |
| No log | 5.625 | 90 | 0.7989 | 0.0554 | 0.7992 |
| No log | 5.75 | 92 | 0.7088 | 0.1731 | 0.7089 |
| No log | 5.875 | 94 | 0.8201 | 0.0554 | 0.8205 |
| No log | 6.0 | 96 | 1.0195 | 0.0 | 1.0201 |
| No log | 6.125 | 98 | 1.0328 | 0.0 | 1.0335 |
| No log | 6.25 | 100 | 0.9344 | 0.0 | 0.9349 |
| No log | 6.375 | 102 | 0.7485 | 0.0906 | 0.7487 |
| No log | 6.5 | 104 | 0.6810 | 0.1387 | 0.6811 |
| No log | 6.625 | 106 | 0.7638 | 0.0672 | 0.7640 |
| No log | 6.75 | 108 | 0.9339 | 0.0 | 0.9342 |
| No log | 6.875 | 110 | 0.9824 | 0.0 | 0.9828 |
| No log | 7.0 | 112 | 0.9198 | 0.0 | 0.9202 |
| No log | 7.125 | 114 | 0.8078 | 0.0554 | 0.8080 |
| No log | 7.25 | 116 | 0.6480 | 0.1412 | 0.6480 |
| No log | 7.375 | 118 | 0.6108 | 0.1780 | 0.6107 |
| No log | 7.5 | 120 | 0.6581 | 0.1109 | 0.6581 |
| No log | 7.625 | 122 | 0.7813 | 0.0554 | 0.7815 |
| No log | 7.75 | 124 | 0.8870 | 0.0 | 0.8873 |
| No log | 7.875 | 126 | 0.9244 | 0.0 | 0.9247 |
| No log | 8.0 | 128 | 0.9045 | 0.0554 | 0.9048 |
| No log | 8.125 | 130 | 0.8522 | 0.0554 | 0.8524 |
| No log | 8.25 | 132 | 0.8434 | 0.0554 | 0.8436 |
| No log | 8.375 | 134 | 0.8530 | 0.0554 | 0.8532 |
| No log | 8.5 | 136 | 0.8770 | 0.0554 | 0.8772 |
| No log | 8.625 | 138 | 0.8451 | 0.0554 | 0.8453 |
| No log | 8.75 | 140 | 0.8325 | 0.0554 | 0.8327 |
| No log | 8.875 | 142 | 0.8378 | 0.0554 | 0.8380 |
| No log | 9.0 | 144 | 0.8214 | 0.0554 | 0.8216 |
| No log | 9.125 | 146 | 0.7888 | 0.0554 | 0.7890 |
| No log | 9.25 | 148 | 0.7536 | 0.0376 | 0.7537 |
| No log | 9.375 | 150 | 0.7389 | 0.0612 | 0.7390 |
| No log | 9.5 | 152 | 0.7209 | 0.0844 | 0.7210 |
| No log | 9.625 | 154 | 0.7172 | 0.1232 | 0.7173 |
| No log | 9.75 | 156 | 0.7125 | 0.1109 | 0.7126 |
| No log | 9.875 | 158 | 0.7149 | 0.1109 | 0.7150 |
| No log | 10.0 | 160 | 0.7152 | 0.1109 | 0.7153 |
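
The card does not include the metric code, but the Qwk (quadratic weighted kappa) and Mse columns are conventionally computed as in the sketch below. Rounding predictions to integer bins for kappa is an assumption about this run, not something the card states.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    """Sketch of a Trainer-compatible metric function for the Qwk/Mse columns."""
    predictions, labels = eval_pred
    preds = np.asarray(predictions).squeeze(-1)  # one regression logit per example
    mse = mean_squared_error(labels, preds)
    # Kappa requires discrete classes, so round continuous scores to integer
    # bins (an assumption; the card does not say how scores were discretized).
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}
```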

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1