# arabert_cross_organization_task2_fold1

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of the metric computation follows the list):
- Loss: 0.9665
- Qwk: 0.1843
- Mse: 0.9638
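The card does not define these metrics; Qwk is presumably the quadratic weighted kappa and Mse the mean squared error on the evaluation set. Below is a minimal sketch of how the two values could be computed with scikit-learn, assuming integer score labels; it is not the author's actual `compute_metrics` code.

```python
# Minimal sketch (not the author's compute_metrics): quadratic weighted kappa
# (Qwk) and mean squared error (Mse) for aligned sequences of integer scores.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_qwk_mse(predictions, labels):
    """Return Qwk and Mse for two aligned lists of integer scores."""
    qwk = cohen_kappa_score(labels, predictions, weights="quadratic")
    mse = mean_squared_error(labels, predictions)
    return {"qwk": qwk, "mse": mse}

# Hypothetical scores on a small integer scale, for illustration only.
print(compute_qwk_mse(predictions=[1, 2, 3, 4], labels=[1, 2, 2, 5]))
```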
## Model description
More information needed
## Intended uses & limitations
More information needed
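The intended task is not documented. Purely as an illustration, the checkpoint can be loaded like any Hugging Face sequence-classification model; the score scale and whether the head acts as a regressor or a classifier are assumptions here.

```python
# Illustrative inference sketch; the task, score scale, and head type
# (regression vs. classification) are not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task2_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص تجريبي للتقييم"  # hypothetical Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # raw output; its interpretation depends on the training setup
```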
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
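These values map one-to-one onto the Hugging Face `TrainingArguments`; the sketch below reproduces them, with the output directory and anything else not listed above left as an assumption or library default.

```python
# Sketch of TrainingArguments matching the listed hyperparameters; the output
# directory and any setting not listed above are assumptions or defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task2_fold1",  # assumed, not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```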
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:---:|:---:|:---:|:---:|:---:|:---:|
No log | 0.125 | 2 | 4.9621 | 0.0014 | 4.9593 |
No log | 0.25 | 4 | 2.1315 | -0.0199 | 2.1286 |
No log | 0.375 | 6 | 0.9548 | 0.0027 | 0.9513 |
No log | 0.5 | 8 | 0.8530 | 0.0678 | 0.8507 |
No log | 0.625 | 10 | 0.7928 | 0.1306 | 0.7900 |
No log | 0.75 | 12 | 0.7710 | 0.1552 | 0.7682 |
No log | 0.875 | 14 | 0.8786 | -0.0670 | 0.8766 |
No log | 1.0 | 16 | 0.9254 | -0.0565 | 0.9242 |
No log | 1.125 | 18 | 1.0088 | -0.0232 | 1.0068 |
No log | 1.25 | 20 | 0.8151 | 0.1520 | 0.8123 |
No log | 1.375 | 22 | 0.7943 | 0.0808 | 0.7913 |
No log | 1.5 | 24 | 0.8986 | 0.0767 | 0.8958 |
No log | 1.625 | 26 | 0.9046 | 0.0935 | 0.9018 |
No log | 1.75 | 28 | 0.8005 | 0.0962 | 0.7975 |
No log | 1.875 | 30 | 0.8246 | 0.1261 | 0.8217 |
No log | 2.0 | 32 | 1.0455 | 0.0565 | 1.0427 |
No log | 2.125 | 34 | 1.0848 | 0.0536 | 1.0819 |
No log | 2.25 | 36 | 0.8432 | 0.1284 | 0.8401 |
No log | 2.375 | 38 | 0.8148 | 0.1252 | 0.8116 |
No log | 2.5 | 40 | 0.9950 | 0.0565 | 0.9919 |
No log | 2.625 | 42 | 1.3659 | 0.0 | 1.3632 |
No log | 2.75 | 44 | 1.3473 | 0.0 | 1.3447 |
No log | 2.875 | 46 | 1.0770 | 0.0536 | 1.0742 |
No log | 3.0 | 48 | 0.9980 | 0.1016 | 0.9952 |
No log | 3.125 | 50 | 1.2032 | 0.0 | 1.2005 |
No log | 3.25 | 52 | 1.5241 | 0.0 | 1.5217 |
No log | 3.375 | 54 | 1.5210 | 0.0231 | 1.5187 |
No log | 3.5 | 56 | 1.3342 | 0.0 | 1.3319 |
No log | 3.625 | 58 | 1.1036 | 0.0 | 1.1011 |
No log | 3.75 | 60 | 1.0121 | 0.0536 | 1.0095 |
No log | 3.875 | 62 | 1.1441 | 0.0 | 1.1415 |
No log | 4.0 | 64 | 1.3639 | 0.0 | 1.3614 |
No log | 4.125 | 66 | 1.3735 | 0.0025 | 1.3709 |
No log | 4.25 | 68 | 1.0825 | 0.0 | 1.0799 |
No log | 4.375 | 70 | 0.9120 | 0.1641 | 0.9093 |
No log | 4.5 | 72 | 0.9506 | 0.1277 | 0.9480 |
No log | 4.625 | 74 | 1.1517 | 0.0 | 1.1492 |
No log | 4.75 | 76 | 1.2276 | 0.0 | 1.2251 |
No log | 4.875 | 78 | 1.1183 | 0.0 | 1.1159 |
No log | 5.0 | 80 | 1.0068 | 0.0333 | 1.0043 |
No log | 5.125 | 82 | 1.0353 | 0.0360 | 1.0328 |
No log | 5.25 | 84 | 1.2149 | 0.0025 | 1.2124 |
No log | 5.375 | 86 | 1.3871 | -0.0077 | 1.3847 |
No log | 5.5 | 88 | 1.3159 | -0.0052 | 1.3135 |
No log | 5.625 | 90 | 1.1412 | 0.0600 | 1.1386 |
No log | 5.75 | 92 | 1.1161 | 0.0755 | 1.1135 |
No log | 5.875 | 94 | 1.0471 | 0.1059 | 1.0444 |
No log | 6.0 | 96 | 0.9768 | 0.1096 | 0.9740 |
No log | 6.125 | 98 | 0.8671 | 0.2312 | 0.8642 |
No log | 6.25 | 100 | 0.8025 | 0.1206 | 0.7994 |
No log | 6.375 | 102 | 0.8027 | 0.1414 | 0.7995 |
No log | 6.5 | 104 | 0.8550 | 0.1743 | 0.8519 |
No log | 6.625 | 106 | 0.9815 | 0.0889 | 0.9787 |
No log | 6.75 | 108 | 1.0738 | 0.0846 | 1.0710 |
No log | 6.875 | 110 | 1.1903 | 0.1144 | 1.1876 |
No log | 7.0 | 112 | 1.2101 | 0.1122 | 1.2074 |
No log | 7.125 | 114 | 1.2463 | 0.0812 | 1.2436 |
No log | 7.25 | 116 | 1.3137 | 0.0117 | 1.3110 |
No log | 7.375 | 118 | 1.3237 | -0.0052 | 1.3210 |
No log | 7.5 | 120 | 1.2538 | 0.0649 | 1.2512 |
No log | 7.625 | 122 | 1.2188 | 0.1004 | 1.2162 |
No log | 7.75 | 124 | 1.1146 | 0.1281 | 1.1120 |
No log | 7.875 | 126 | 1.0332 | 0.1480 | 1.0306 |
No log | 8.0 | 128 | 0.9680 | 0.1644 | 0.9653 |
No log | 8.125 | 130 | 0.9608 | 0.1792 | 0.9581 |
No log | 8.25 | 132 | 0.9993 | 0.1681 | 0.9966 |
No log | 8.375 | 134 | 1.0373 | 0.0889 | 1.0347 |
No log | 8.5 | 136 | 1.0747 | 0.1177 | 1.0721 |
No log | 8.625 | 138 | 1.0865 | 0.1030 | 1.0839 |
No log | 8.75 | 140 | 1.0732 | 0.1486 | 1.0707 |
No log | 8.875 | 142 | 1.0424 | 0.1505 | 1.0398 |
No log | 9.0 | 144 | 1.0425 | 0.1505 | 1.0399 |
No log | 9.125 | 146 | 1.0220 | 0.1587 | 1.0194 |
No log | 9.25 | 148 | 1.0055 | 0.2009 | 1.0029 |
No log | 9.375 | 150 | 0.9949 | 0.1941 | 0.9923 |
No log | 9.5 | 152 | 0.9725 | 0.1875 | 0.9698 |
No log | 9.625 | 154 | 0.9600 | 0.2145 | 0.9574 |
No log | 9.75 | 156 | 0.9593 | 0.2145 | 0.9566 |
No log | 9.875 | 158 | 0.9660 | 0.1843 | 0.9634 |
No log | 10.0 | 160 | 0.9665 | 0.1843 | 0.9638 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
## Model tree for salbatarni/arabert_cross_organization_task2_fold1

Base model: [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02)