---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_relevance_task7_fold1
  results: []
---

# arabert_cross_relevance_task7_fold1

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co./aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Qwk: 0.0339
- Mse: 0.2223

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the training setup sketch at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 0.4577          | 0.0305 | 0.4577 |
| No log        | 0.0702 | 4    | 0.2924          | 0.0268 | 0.2924 |
| No log        | 0.1053 | 6    | 0.6165          | 0.0339 | 0.6165 |
| No log        | 0.1404 | 8    | 1.1332          | 0.0071 | 1.1332 |
| No log        | 0.1754 | 10   | 1.0089          | 0.0084 | 1.0089 |
| No log        | 0.2105 | 12   | 0.6577          | 0.0383 | 0.6577 |
| No log        | 0.2456 | 14   | 0.3893          | 0.0245 | 0.3893 |
| No log        | 0.2807 | 16   | 0.2631          | 0.0199 | 0.2631 |
| No log        | 0.3158 | 18   | 0.2402          | 0.0199 | 0.2402 |
| No log        | 0.3509 | 20   | 0.2586          | 0.0199 | 0.2586 |
| No log        | 0.3860 | 22   | 0.3150          | 0.0288 | 0.3150 |
| No log        | 0.4211 | 24   | 0.4282          | 0.0399 | 0.4282 |
| No log        | 0.4561 | 26   | 0.4430          | 0.0382 | 0.4430 |
| No log        | 0.4912 | 28   | 0.4205          | 0.0399 | 0.4205 |
| No log        | 0.5263 | 30   | 0.3631          | 0.0433 | 0.3631 |
| No log        | 0.5614 | 32   | 0.3063          | 0.0507 | 0.3063 |
| No log        | 0.5965 | 34   | 0.3162          | 0.0488 | 0.3162 |
| No log        | 0.6316 | 36   | 0.3646          | 0.0451 | 0.3646 |
| No log        | 0.6667 | 38   | 0.3766          | 0.0433 | 0.3766 |
| No log        | 0.7018 | 40   | 0.3470          | 0.0469 | 0.3470 |
| No log        | 0.7368 | 42   | 0.3331          | 0.0436 | 0.3331 |
| No log        | 0.7719 | 44   | 0.3000          | 0.0495 | 0.3000 |
| No log        | 0.8070 | 46   | 0.2692          | 0.0439 | 0.2692 |
| No log        | 0.8421 | 48   | 0.2579          | 0.0381 | 0.2579 |
| No log        | 0.8772 | 50   | 0.2450          | 0.0401 | 0.2450 |
| No log        | 0.9123 | 52   | 0.2306          | 0.0339 | 0.2306 |
| No log        | 0.9474 | 54   | 0.2223          | 0.0339 | 0.2223 |
| No log        | 0.9825 | 56   | 0.2223          | 0.0339 | 0.2223 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
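
### Training setup sketch

For reference, the sketch below shows one way the hyperparameters listed above could be expressed with the `transformers` `Trainer` API. The base checkpoint, learning rate, batch sizes, seed, scheduler, and epoch count are taken from this card; the regression head (`num_labels=1`), the evaluation schedule, and the dataset placeholders are assumptions, since the training and evaluation data are not documented here.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumption: a single-output regression head, consistent with the MSE and QWK
# metrics reported above.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=1)

# Hyperparameters from the "Training hyperparameters" section. The default
# AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
training_args = TrainingArguments(
    output_dir="arabert_cross_relevance_task7_fold1",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    eval_strategy="steps",  # assumption: the table above logs an eval every 2 steps
    eval_steps=2,
)

# `train_dataset` and `eval_dataset` are placeholders for the undocumented data:
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```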