# arabert_cross_vocabulary_task6_fold0
This model is a fine-tuned version of aubmindlab/bert-base-arabertv02; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.7920
- Qwk: 0.5432
- Mse: 0.7919
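The card does not ship a usage example. Below is a minimal inference sketch, assuming the checkpoint exposes a single-logit regression head (consistent with the Mse/Qwk metrics reported above); the input text, truncation setting, and score scale are illustrative assumptions, not documented behaviour.

```python
# Minimal sketch: load the fine-tuned checkpoint and score one Arabic text.
# Assumes a single-logit regression head; this is not confirmed by the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_vocabulary_task6_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # an Arabic input text (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze().item())  # predicted score on the model's training scale
```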
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
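As a rough guide to reproducing the run, these values can be mapped onto `transformers.TrainingArguments` as sketched below. The model, datasets, and metric function are placeholders, since the card does not document them.

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# Dataset, model, and compute_metrics must be supplied by the reader.
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task6_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    adam_beta1=0.9,     # Adam betas/epsilon listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(
#     model=model,                      # fine-tuned from aubmindlab/bert-base-arabertv02
#     args=training_args,
#     train_dataset=train_dataset,      # not documented in this card
#     eval_dataset=eval_dataset,
#     compute_metrics=compute_metrics,  # e.g. Qwk and MSE, see below
# )
# trainer.train()
```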
### Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
---|---|---|---|---|---|
No log | 0.0328 | 2 | 3.5613 | 0.0346 | 3.5560 |
No log | 0.0656 | 4 | 2.1952 | 0.1017 | 2.1902 |
No log | 0.0984 | 6 | 1.5039 | 0.1862 | 1.5004 |
No log | 0.1311 | 8 | 1.3393 | 0.2671 | 1.3373 |
No log | 0.1639 | 10 | 1.8925 | 0.2535 | 1.8914 |
No log | 0.1967 | 12 | 1.1957 | 0.3908 | 1.1952 |
No log | 0.2295 | 14 | 0.6924 | 0.5648 | 0.6919 |
No log | 0.2623 | 16 | 0.7305 | 0.5685 | 0.7301 |
No log | 0.2951 | 18 | 0.8936 | 0.5297 | 0.8935 |
No log | 0.3279 | 20 | 1.3379 | 0.4110 | 1.3384 |
No log | 0.3607 | 22 | 1.3337 | 0.4094 | 1.3343 |
No log | 0.3934 | 24 | 0.9456 | 0.4711 | 0.9457 |
No log | 0.4262 | 26 | 0.7156 | 0.5222 | 0.7155 |
No log | 0.4590 | 28 | 0.7226 | 0.5158 | 0.7225 |
No log | 0.4918 | 30 | 0.8353 | 0.4710 | 0.8352 |
No log | 0.5246 | 32 | 1.0175 | 0.4335 | 1.0176 |
No log | 0.5574 | 34 | 1.0983 | 0.4223 | 1.0984 |
No log | 0.5902 | 36 | 1.0847 | 0.4257 | 1.0848 |
No log | 0.6230 | 38 | 0.9629 | 0.4534 | 0.9630 |
No log | 0.6557 | 40 | 0.9060 | 0.4715 | 0.9060 |
No log | 0.6885 | 42 | 0.8296 | 0.4966 | 0.8296 |
No log | 0.7213 | 44 | 0.8084 | 0.5180 | 0.8083 |
No log | 0.7541 | 46 | 0.8344 | 0.5130 | 0.8343 |
No log | 0.7869 | 48 | 0.8968 | 0.4927 | 0.8968 |
No log | 0.8197 | 50 | 0.9144 | 0.4897 | 0.9143 |
No log | 0.8525 | 52 | 0.9014 | 0.4933 | 0.9014 |
No log | 0.8852 | 54 | 0.8595 | 0.5177 | 0.8595 |
No log | 0.9180 | 56 | 0.8289 | 0.5316 | 0.8288 |
No log | 0.9508 | 58 | 0.8059 | 0.5408 | 0.8058 |
No log | 0.9836 | 60 | 0.7920 | 0.5432 | 0.7919 |
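In the table above, Qwk is quadratic weighted kappa and Mse is mean squared error on the evaluation set. A possible `compute_metrics` implementation, assuming integer gold scores and a single regression output rounded to the nearest label, could look like the following sketch (illustrative, not the author's code):

```python
# Sketch of a compute_metrics function producing Qwk and MSE, assuming
# integer gold scores and a single regression output per example.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)           # one logit per example
    mse = mean_squared_error(labels, predictions)
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}
```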
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
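To check that a local environment matches these pins, something like the following can be used:

```python
# Quick check that locally installed versions match the card's pins.
import transformers, torch, datasets, tokenizers

print(transformers.__version__)  # card pins 4.44.0
print(torch.__version__)         # card pins 2.4.0
print(datasets.__version__)      # card pins 2.21.0
print(tokenizers.__version__)    # card pins 0.19.1
```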