---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RE_NegREF_NSD_Nubes_Training_Test_dataset_xlm_RoBERTa_base_fine_tuned
  results: []
---

# RE_NegREF_NSD_Nubes_Training_Test_dataset_xlm_RoBERTa_base_fine_tuned

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co./FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3660
- Negref Precision: 0.5752
- Negref Recall: 0.5786
- Negref F1: 0.5769
- Neg Precision: 0.9489
- Neg Recall: 0.9642
- Neg F1: 0.9565
- Nsco Precision: 0.8852
- Nsco Recall: 0.9039
- Nsco F1: 0.8945
- Precision: 0.8507
- Recall: 0.8643
- F1: 0.8574
- Accuracy: 0.9575

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
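For reference, the sketch below shows how these hyperparameters map onto the `transformers` `TrainingArguments` API. It is an assumption about the training setup (the original training script is not part of this card), and the dataset and label preprocessing are omitted because the training data are not documented above.

```python
# Minimal sketch, not the authors' original script: the hyperparameters listed
# above expressed as TrainingArguments for the Hugging Face Trainer.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="RE_NegREF_NSD_Nubes_Training_Test_dataset_xlm_RoBERTa_base_fine_tuned",
    learning_rate=2e-5,             # learning_rate: 2e-05
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,                        # seed: 42
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    num_train_epochs=12,            # num_epochs: 12
    evaluation_strategy="epoch",    # assumption: the results table reports one evaluation per epoch
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
)
```

A token-classification model initialized from `FacebookAI/xlm-roberta-base` and the tokenized training/evaluation splits would then be passed to `Trainer` together with these arguments.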
### Training results

| Training Loss | Epoch | Step | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:|
| 0.0268 | 1.0 | 1729 | 0.2889 | 0.5289 | 0.5639 | 0.5458 | 0.9560 | 0.9459 | 0.9509 | 0.8616 | 0.8616 | 0.8616 | 0.8302 | 0.8375 | 0.8338 | 0.9537 |
| 0.0233 | 2.0 | 3458 | 0.3034 | 0.5161 | 0.5419 | 0.5287 | 0.9448 | 0.9614 | 0.9530 | 0.8731 | 0.8949 | 0.8838 | 0.8301 | 0.8523 | 0.8411 | 0.9522 |
| 0.0205 | 3.0 | 5187 | 0.2748 | 0.5591 | 0.5624 | 0.5608 | 0.9502 | 0.9649 | 0.9575 | 0.8645 | 0.8926 | 0.8783 | 0.8401 | 0.8570 | 0.8485 | 0.9557 |
| 0.012 | 4.0 | 6916 | 0.2915 | 0.5238 | 0.5653 | 0.5438 | 0.9525 | 0.9586 | 0.9555 | 0.8693 | 0.8858 | 0.8775 | 0.8310 | 0.8523 | 0.8415 | 0.9558 |
| 0.0105 | 5.0 | 8645 | 0.3158 | 0.5526 | 0.5786 | 0.5653 | 0.9515 | 0.9635 | 0.9574 | 0.8663 | 0.9017 | 0.8836 | 0.8377 | 0.8631 | 0.8502 | 0.9559 |
| 0.0095 | 6.0 | 10374 | 0.3048 | 0.5424 | 0.6197 | 0.5785 | 0.9504 | 0.9684 | 0.9593 | 0.8786 | 0.9085 | 0.8933 | 0.8348 | 0.8760 | 0.8549 | 0.9584 |
| 0.0073 | 7.0 | 12103 | 0.3238 | 0.5511 | 0.6021 | 0.5754 | 0.9528 | 0.9642 | 0.9585 | 0.8751 | 0.8956 | 0.8852 | 0.8386 | 0.8658 | 0.8520 | 0.9558 |
| 0.0072 | 8.0 | 13832 | 0.3561 | 0.5737 | 0.5712 | 0.5725 | 0.9547 | 0.9614 | 0.9580 | 0.88 | 0.8986 | 0.8892 | 0.8510 | 0.8596 | 0.8553 | 0.9557 |
| 0.0048 | 9.0 | 15561 | 0.3452 | 0.5739 | 0.5815 | 0.5777 | 0.9447 | 0.9719 | 0.9581 | 0.8767 | 0.9092 | 0.8927 | 0.8457 | 0.8701 | 0.8578 | 0.9573 |
| 0.0039 | 10.0 | 17290 | 0.3492 | 0.5700 | 0.5742 | 0.5721 | 0.9528 | 0.9642 | 0.9585 | 0.8825 | 0.9032 | 0.8927 | 0.85 | 0.8631 | 0.8565 | 0.9586 |
| 0.0019 | 11.0 | 19019 | 0.3629 | 0.5796 | 0.5771 | 0.5784 | 0.9502 | 0.9656 | 0.9579 | 0.8824 | 0.9077 | 0.8949 | 0.8516 | 0.8661 | 0.8588 | 0.9567 |
| 0.0007 | 12.0 | 20748 | 0.3660 | 0.5752 | 0.5786 | 0.5769 | 0.9489 | 0.9642 | 0.9565 | 0.8852 | 0.9039 | 0.8945 | 0.8507 | 0.8643 | 0.8574 | 0.9575 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
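## How to use

Since the intended-uses section above is still empty, the following is only a hedged usage sketch: it loads the fine-tuned checkpoint with the token-classification `pipeline`. The Hub repository path (`your-username/...`) and the example sentence are assumptions; the "Nubes" in the model name suggests the Spanish NUBes clinical negation corpus, so a Spanish clinical sentence is used for illustration.

```python
# Hedged usage sketch: loading the fine-tuned checkpoint for token-level
# negation tagging. "your-username" is a placeholder and should be replaced
# with the actual Hub namespace hosting this model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="your-username/RE_NegREF_NSD_Nubes_Training_Test_dataset_xlm_RoBERTa_base_fine_tuned",
    aggregation_strategy="simple",  # merge subword pieces into word-level spans
)

# Example sentence (Spanish): "The patient does not present fever or signs of infection."
print(tagger("El paciente no presenta fiebre ni signos de infección."))
```

The predicted entity groups correspond to the label set reported in the metrics above (negation cue, negation scope, and negation reference), plus the usual `O` tag for tokens outside any span.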