# arabert_cross_organization_task5_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.9408
- Qwk: 0.3679
- Mse: 0.9408
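
The validation loss equals the reported Mse, which suggests the model was trained as a regressor with an MSE objective, and Qwk is presumably the quadratically weighted Cohen's kappa. A minimal sketch of how both metrics can be computed with scikit-learn (the arrays below are illustrative placeholders, not the actual evaluation data):

```python
# Sketch: computing QWK and MSE with scikit-learn.
# y_true / y_pred are illustrative placeholders, not this model's eval data.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 2, 3, 1]  # gold labels on a discrete score scale
y_pred = [0, 2, 2, 1, 3, 1]  # predictions rounded to the same scale

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}")
```
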

## Model description

More information needed

## Intended uses & limitations

More information needed
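
Pending proper documentation, the sketch below shows one way to load the checkpoint from the Hub. Treating the head as single-output regression is an assumption (the reported loss equals the MSE); adjust it if the actual task head differs.

```python
# Hedged sketch: loading the checkpoint for inference.
# Assumption: a single-output regression head; adjust if the head differs.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_organization_task5_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # placeholder: an Arabic input text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```
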

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
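
These settings map directly onto `transformers.TrainingArguments`. A hedged reproduction sketch follows; the dummy dataset and the rounding inside `compute_metrics` are stand-ins, since the actual training data and metric code are not documented in this card:

```python
# Hedged sketch: wiring the listed hyperparameters into TrainingArguments.
# The two-example dataset is a stand-in for the undocumented training data.
import numpy as np
from datasets import Dataset
from sklearn.metrics import cohen_kappa_score, mean_squared_error
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=1 -> regression head; an assumption based on the MSE-style loss.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

# Dummy stand-in data; replace with the real (undocumented) dataset.
raw = Dataset.from_dict({"text": ["نص تجريبي", "نص آخر"], "label": [1.0, 2.0]})
ds = raw.map(
    lambda b: tokenizer(b["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    preds = preds.squeeze()
    rounded = np.rint(preds).astype(int)  # assumes a discrete gold score scale
    return {
        "qwk": cohen_kappa_score(labels.astype(int), rounded, weights="quadratic"),
        "mse": mean_squared_error(labels, preds),
    }

args = TrainingArguments(
    output_dir="arabert_cross_organization_task5_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer's
    # default optimizer settings, so they need no explicit flags.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    eval_dataset=ds,
    compute_metrics=compute_metrics,
)
trainer.train()
```
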

### Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 0.1333 | 2 | 5.0062 | -0.0036 | 5.0062 |
| No log | 0.2667 | 4 | 2.2755 | 0.0402 | 2.2755 |
| No log | 0.4 | 6 | 0.9209 | 0.1375 | 0.9209 |
| No log | 0.5333 | 8 | 0.7216 | 0.2666 | 0.7216 |
| No log | 0.6667 | 10 | 2.2848 | 0.1146 | 2.2848 |
| No log | 0.8 | 12 | 2.4339 | 0.1213 | 2.4339 |
| No log | 0.9333 | 14 | 0.7615 | 0.3777 | 0.7615 |
| No log | 1.0667 | 16 | 0.6179 | 0.5173 | 0.6179 |
| No log | 1.2 | 18 | 0.6800 | 0.3857 | 0.6800 |
| No log | 1.3333 | 20 | 1.1833 | 0.3011 | 1.1833 |
| No log | 1.4667 | 22 | 1.1609 | 0.3150 | 1.1609 |
| No log | 1.6 | 24 | 0.6395 | 0.4537 | 0.6395 |
| No log | 1.7333 | 26 | 0.5534 | 0.5065 | 0.5534 |
| No log | 1.8667 | 28 | 0.6236 | 0.4420 | 0.6236 |
| No log | 2.0 | 30 | 0.6228 | 0.4306 | 0.6228 |
| No log | 2.1333 | 32 | 0.5518 | 0.4931 | 0.5518 |
| No log | 2.2667 | 34 | 0.6084 | 0.4242 | 0.6084 |
| No log | 2.4 | 36 | 0.7055 | 0.3694 | 0.7055 |
| No log | 2.5333 | 38 | 0.7175 | 0.3935 | 0.7175 |
| No log | 2.6667 | 40 | 0.6114 | 0.4689 | 0.6114 |
| No log | 2.8 | 42 | 0.7045 | 0.4030 | 0.7045 |
| No log | 2.9333 | 44 | 0.9915 | 0.3267 | 0.9915 |
| No log | 3.0667 | 46 | 1.0131 | 0.3023 | 1.0131 |
| No log | 3.2 | 48 | 0.7062 | 0.3732 | 0.7062 |
| No log | 3.3333 | 50 | 0.5236 | 0.5208 | 0.5236 |
| No log | 3.4667 | 52 | 0.5249 | 0.5718 | 0.5249 |
| No log | 3.6 | 54 | 0.5782 | 0.4935 | 0.5782 |
| No log | 3.7333 | 56 | 0.8300 | 0.4331 | 0.8300 |
| No log | 3.8667 | 58 | 1.0062 | 0.3503 | 1.0062 |
| No log | 4.0 | 60 | 0.8152 | 0.3857 | 0.8152 |
| No log | 4.1333 | 62 | 0.5614 | 0.4616 | 0.5614 |
| No log | 4.2667 | 64 | 0.5056 | 0.5013 | 0.5056 |
| No log | 4.4 | 66 | 0.5357 | 0.4729 | 0.5357 |
| No log | 4.5333 | 68 | 0.6479 | 0.4155 | 0.6479 |
| No log | 4.6667 | 70 | 0.8469 | 0.3777 | 0.8469 |
| No log | 4.8 | 72 | 0.8226 | 0.3870 | 0.8226 |
| No log | 4.9333 | 74 | 0.7370 | 0.4225 | 0.7370 |
| No log | 5.0667 | 76 | 0.7636 | 0.4313 | 0.7636 |
| No log | 5.2 | 78 | 0.8153 | 0.4219 | 0.8153 |
| No log | 5.3333 | 80 | 0.8004 | 0.4026 | 0.8004 |
| No log | 5.4667 | 82 | 0.7252 | 0.4152 | 0.7252 |
| No log | 5.6 | 84 | 0.7142 | 0.4140 | 0.7142 |
| No log | 5.7333 | 86 | 0.7743 | 0.3924 | 0.7743 |
| No log | 5.8667 | 88 | 0.6869 | 0.4122 | 0.6869 |
| No log | 6.0 | 90 | 0.6812 | 0.4211 | 0.6812 |
| No log | 6.1333 | 92 | 0.7593 | 0.4027 | 0.7593 |
| No log | 6.2667 | 94 | 0.9111 | 0.3581 | 0.9111 |
| No log | 6.4 | 96 | 0.9395 | 0.3469 | 0.9395 |
| No log | 6.5333 | 98 | 0.7996 | 0.3869 | 0.7996 |
| No log | 6.6667 | 100 | 0.6895 | 0.4088 | 0.6895 |
| No log | 6.8 | 102 | 0.6709 | 0.4175 | 0.6709 |
| No log | 6.9333 | 104 | 0.7444 | 0.4091 | 0.7444 |
| No log | 7.0667 | 106 | 0.9335 | 0.3874 | 0.9335 |
| No log | 7.2 | 108 | 1.1591 | 0.3347 | 1.1591 |
| No log | 7.3333 | 110 | 1.1745 | 0.3240 | 1.1745 |
| No log | 7.4667 | 112 | 1.0111 | 0.3414 | 1.0111 |
| No log | 7.6 | 114 | 0.7911 | 0.3951 | 0.7911 |
| No log | 7.7333 | 116 | 0.7124 | 0.4080 | 0.7124 |
| No log | 7.8667 | 118 | 0.7261 | 0.4029 | 0.7261 |
| No log | 8.0 | 120 | 0.7824 | 0.3852 | 0.7824 |
| No log | 8.1333 | 122 | 0.8528 | 0.3744 | 0.8528 |
| No log | 8.2667 | 124 | 0.8584 | 0.3744 | 0.8584 |
| No log | 8.4 | 126 | 0.8434 | 0.3747 | 0.8434 |
| No log | 8.5333 | 128 | 0.8058 | 0.3877 | 0.8058 |
| No log | 8.6667 | 130 | 0.7951 | 0.3938 | 0.7951 |
| No log | 8.8 | 132 | 0.8278 | 0.3925 | 0.8278 |
| No log | 8.9333 | 134 | 0.8301 | 0.3938 | 0.8301 |
| No log | 9.0667 | 136 | 0.8599 | 0.3885 | 0.8599 |
| No log | 9.2 | 138 | 0.9104 | 0.3734 | 0.9104 |
| No log | 9.3333 | 140 | 0.9438 | 0.3692 | 0.9438 |
| No log | 9.4667 | 142 | 0.9743 | 0.3664 | 0.9743 |
| No log | 9.6 | 144 | 0.9827 | 0.3664 | 0.9827 |
| No log | 9.7333 | 146 | 0.9690 | 0.3656 | 0.9690 |
| No log | 9.8667 | 148 | 0.9506 | 0.3654 | 0.9506 |
| No log | 10.0 | 150 | 0.9408 | 0.3679 | 0.9408 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1