arabert_cross_organization_task2_fold5
This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4864
- Qwk (quadratic weighted kappa): 0.6953
- Mse (mean squared error): 0.4868
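The combination of Qwk and Mse suggests an ordinal scoring task evaluated as regression, but the card does not document the task. A minimal loading sketch under that assumption (the regression-head reading and the placeholder input are mine, not stated in the card):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Repo id as published; the regression-style head is an assumption
# inferred from the Mse metric, not documented in this card.
model_id = "salbatarni/arabert_cross_organization_task2_fold5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # replace with an Arabic input text to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze().tolist())
```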
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the reconstruction sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
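For reference, these settings map onto a `transformers` `TrainingArguments` object roughly as follows. This is a sketch reconstructed from the list above, not the original training script; `output_dir` is a placeholder and model/dataset wiring is omitted:

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task2_fold5",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam betas and epsilon match the values stated above
    # (these are also the Trainer defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```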
Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
---|---|---|---|---|---|
No log | 0.1333 | 2 | 1.3091 | 0.2225 | 1.3089 |
No log | 0.2667 | 4 | 0.8804 | 0.3887 | 0.8814 |
No log | 0.4 | 6 | 1.3858 | 0.5574 | 1.3875 |
No log | 0.5333 | 8 | 0.8835 | 0.6154 | 0.8852 |
No log | 0.6667 | 10 | 0.7015 | 0.4522 | 0.7026 |
No log | 0.8 | 12 | 0.6464 | 0.5054 | 0.6475 |
No log | 0.9333 | 14 | 0.7433 | 0.7082 | 0.7448 |
No log | 1.0667 | 16 | 0.8118 | 0.7522 | 0.8134 |
No log | 1.2 | 18 | 0.6135 | 0.7499 | 0.6147 |
No log | 1.3333 | 20 | 0.5249 | 0.6739 | 0.5259 |
No log | 1.4667 | 22 | 0.5390 | 0.7248 | 0.5401 |
No log | 1.6 | 24 | 0.6195 | 0.7729 | 0.6207 |
No log | 1.7333 | 26 | 0.5748 | 0.7628 | 0.5759 |
No log | 1.8667 | 28 | 0.5013 | 0.7261 | 0.5022 |
No log | 2.0 | 30 | 0.5356 | 0.7773 | 0.5366 |
No log | 2.1333 | 32 | 0.5953 | 0.7981 | 0.5964 |
No log | 2.2667 | 34 | 0.6287 | 0.8036 | 0.6298 |
No log | 2.4 | 36 | 0.5427 | 0.7642 | 0.5438 |
No log | 2.5333 | 38 | 0.4766 | 0.6805 | 0.4773 |
No log | 2.6667 | 40 | 0.4783 | 0.6777 | 0.4789 |
No log | 2.8 | 42 | 0.5040 | 0.7282 | 0.5048 |
No log | 2.9333 | 44 | 0.5170 | 0.7463 | 0.5179 |
No log | 3.0667 | 46 | 0.4870 | 0.7115 | 0.4876 |
No log | 3.2 | 48 | 0.5068 | 0.7429 | 0.5075 |
No log | 3.3333 | 50 | 0.5038 | 0.7270 | 0.5044 |
No log | 3.4667 | 52 | 0.4985 | 0.7013 | 0.4989 |
No log | 3.6 | 54 | 0.4972 | 0.6846 | 0.4975 |
No log | 3.7333 | 56 | 0.4943 | 0.6912 | 0.4947 |
No log | 3.8667 | 58 | 0.5072 | 0.7104 | 0.5078 |
No log | 4.0 | 60 | 0.5132 | 0.7203 | 0.5139 |
No log | 4.1333 | 62 | 0.5679 | 0.7897 | 0.5689 |
No log | 4.2667 | 64 | 0.5555 | 0.7899 | 0.5566 |
No log | 4.4 | 66 | 0.4706 | 0.6934 | 0.4712 |
No log | 4.5333 | 68 | 0.4707 | 0.6425 | 0.4710 |
No log | 4.6667 | 70 | 0.4678 | 0.6421 | 0.4680 |
No log | 4.8 | 72 | 0.4657 | 0.6697 | 0.4660 |
No log | 4.9333 | 74 | 0.4644 | 0.7034 | 0.4648 |
No log | 5.0667 | 76 | 0.4691 | 0.7052 | 0.4695 |
No log | 5.2 | 78 | 0.4782 | 0.7024 | 0.4787 |
No log | 5.3333 | 80 | 0.4778 | 0.6948 | 0.4782 |
No log | 5.4667 | 82 | 0.4788 | 0.6806 | 0.4792 |
No log | 5.6 | 84 | 0.4857 | 0.7050 | 0.4862 |
No log | 5.7333 | 86 | 0.4827 | 0.7024 | 0.4832 |
No log | 5.8667 | 88 | 0.4770 | 0.6872 | 0.4774 |
No log | 6.0 | 90 | 0.4795 | 0.6277 | 0.4798 |
No log | 6.1333 | 92 | 0.4708 | 0.6656 | 0.4712 |
No log | 6.2667 | 94 | 0.4697 | 0.7074 | 0.4702 |
No log | 6.4 | 96 | 0.4653 | 0.6945 | 0.4656 |
No log | 6.5333 | 98 | 0.4668 | 0.6693 | 0.4671 |
No log | 6.6667 | 100 | 0.4669 | 0.6885 | 0.4672 |
No log | 6.8 | 102 | 0.4696 | 0.6978 | 0.4700 |
No log | 6.9333 | 104 | 0.4724 | 0.6954 | 0.4727 |
No log | 7.0667 | 106 | 0.4741 | 0.6619 | 0.4743 |
No log | 7.2 | 108 | 0.4803 | 0.6229 | 0.4804 |
No log | 7.3333 | 110 | 0.4719 | 0.6396 | 0.4721 |
No log | 7.4667 | 112 | 0.4673 | 0.7015 | 0.4677 |
No log | 7.6 | 114 | 0.4766 | 0.7129 | 0.4772 |
No log | 7.7333 | 116 | 0.4720 | 0.7071 | 0.4725 |
No log | 7.8667 | 118 | 0.4730 | 0.6457 | 0.4733 |
No log | 8.0 | 120 | 0.4829 | 0.6097 | 0.4831 |
No log | 8.1333 | 122 | 0.4865 | 0.6055 | 0.4866 |
No log | 8.2667 | 124 | 0.4796 | 0.6640 | 0.4799 |
No log | 8.4 | 126 | 0.4806 | 0.6954 | 0.4811 |
No log | 8.5333 | 128 | 0.4865 | 0.7119 | 0.4870 |
No log | 8.6667 | 130 | 0.4841 | 0.7139 | 0.4846 |
No log | 8.8 | 132 | 0.4802 | 0.6936 | 0.4806 |
No log | 8.9333 | 134 | 0.4820 | 0.6650 | 0.4823 |
No log | 9.0667 | 136 | 0.4843 | 0.6546 | 0.4846 |
No log | 9.2 | 138 | 0.4842 | 0.6572 | 0.4845 |
No log | 9.3333 | 140 | 0.4837 | 0.6715 | 0.4840 |
No log | 9.4667 | 142 | 0.4849 | 0.6959 | 0.4853 |
No log | 9.6 | 144 | 0.4863 | 0.6988 | 0.4867 |
No log | 9.7333 | 146 | 0.4865 | 0.6953 | 0.4869 |
No log | 9.8667 | 148 | 0.4864 | 0.6953 | 0.4868 |
No log | 10.0 | 150 | 0.4864 | 0.6953 | 0.4868 |
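Qwk here is presumably Cohen's kappa with quadratic weights, the standard agreement metric for ordinal scoring, and Mse is mean squared error. A minimal sketch of computing both with scikit-learn (the toy labels and the rounding step are illustrative assumptions about the evaluation setup):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold labels and model outputs, for illustration only.
y_true = np.array([0, 1, 2, 3, 2, 1])
y_pred = np.array([0.2, 1.1, 2.4, 2.8, 1.9, 0.7])

mse = mean_squared_error(y_true, y_pred)
# Kappa needs discrete labels, so round the continuous predictions first.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```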
Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
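To reproduce this environment, pin the versions above, e.g. `pip install transformers==4.44.0 torch==2.4.0 datasets==2.21.0 tokenizers==0.19.1` (assuming the standard PyPI package names).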