---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RE_NegREF_NSD_Nubes_Training_Test_dataset_RoBERTa_base_bne_fine_tuned
  results: []
---

# RE_NegREF_NSD_Nubes_Training_Test_dataset_RoBERTa_base_bne_fine_tuned

This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne). The training data is not documented here, although the model name points to the NUBes negation corpus, and the metric names suggest token-level tagging of negation cues (Neg), negation scopes (Nsco), and negation references (Negref). It achieves the following results on the evaluation set (a sketch of how such entity-level metrics are computed follows the list):

- Loss: 0.3480
- Negref Precision: 0.5338
- Negref Recall: 0.5565
- Negref F1: 0.5449
- Neg Precision: 0.9552
- Neg Recall: 0.9593
- Neg F1: 0.9573
- Nsco Precision: 0.8862
- Nsco Recall: 0.9070
- Nsco F1: 0.8964
- Precision: 0.8428
- Recall: 0.8591
- F1: 0.8509
- Accuracy: 0.9569
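
For orientation: these read as standard entity-level (seqeval-style) precision/recall/F1 scores computed per label over BIO tag sequences. Below is a minimal sketch of that computation; the tag sequences are hypothetical, and the NEG/NSCO/NEGREF label names are inferred from the metric names above rather than confirmed by this card.

```python
from seqeval.metrics import classification_report

# Hypothetical gold and predicted BIO sequences; NEG/NSCO/NEGREF
# label names are inferred from the metrics reported above.
y_true = [["O", "B-NEG", "B-NSCO", "I-NSCO", "O", "B-NEGREF"]]
y_pred = [["O", "B-NEG", "B-NSCO", "O", "O", "B-NEGREF"]]

# Prints per-label precision/recall/F1 plus micro-averaged totals,
# mirroring the per-label and overall numbers in this card.
print(classification_report(y_true, y_pred))
```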

## Model description

More information needed

## Intended uses & limitations

More information needed
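
Pending fuller documentation, here is a minimal inference sketch. The Hub id is an assumption pieced together from the uploader and model name; adjust it if the checkpoint lives elsewhere.

```python
from transformers import pipeline

# Assumed Hub id (uploader + model name); not confirmed by this card.
model_id = "ajtamayoh/RE_NegREF_NSD_Nubes_Training_Test_dataset_RoBERTa_base_bne_fine_tuned"

# Token-classification pipeline; "simple" aggregation merges subword
# pieces into word-level entity spans.
tagger = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Spanish clinical-style example (the base model is a Spanish RoBERTa).
print(tagger("El paciente no presenta fiebre ni signos de infección."))
```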

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
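
These settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch only: the output directory is a guess, and the model, tokenizer, dataset, and `Trainer` wiring are omitted.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters (Transformers 4.38.2 API).
# output_dir is a placeholder; everything else mirrors the list above.
training_args = TrainingArguments(
    output_dir="RE_NegREF_NSD_Nubes_Training_Test_dataset_RoBERTa_base_bne_fine_tuned",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
```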

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:|
| 0.0092        | 1.0   | 1729  | 0.3395          | 0.5044           | 0.5051        | 0.5048    | 0.9546        | 0.9600     | 0.9573 | 0.8681         | 0.8911      | 0.8794  | 0.8323    | 0.8430 | 0.8376 | 0.9552   |
| 0.0096        | 2.0   | 3458  | 0.2674          | 0.4694           | 0.5962        | 0.5252    | 0.9483        | 0.9663     | 0.9572 | 0.8717         | 0.8888      | 0.8801  | 0.8070    | 0.8629 | 0.8340 | 0.9545   |
| 0.0069        | 3.0   | 5187  | 0.2922          | 0.5054           | 0.5477        | 0.5257    | 0.9488        | 0.9628     | 0.9557 | 0.8740         | 0.8865      | 0.8802  | 0.8275    | 0.8509 | 0.8390 | 0.9559   |
| 0.0066        | 4.0   | 6916  | 0.2958          | 0.5371           | 0.5639        | 0.5501    | 0.9533        | 0.9614     | 0.9573 | 0.8714         | 0.8971      | 0.8841  | 0.8368    | 0.8576 | 0.8471 | 0.9559   |
| 0.0042        | 5.0   | 8645  | 0.2978          | 0.5475           | 0.5419        | 0.5446    | 0.9584        | 0.9551     | 0.9567 | 0.8850         | 0.8964      | 0.8906  | 0.8491    | 0.8503 | 0.8497 | 0.9558   |
| 0.0072        | 6.0   | 10374 | 0.2875          | 0.5068           | 0.5477        | 0.5265    | 0.9547        | 0.9614     | 0.9580 | 0.8791         | 0.8911      | 0.8850  | 0.8319    | 0.8521 | 0.8419 | 0.9561   |
| 0.0045        | 7.0   | 12103 | 0.3203          | 0.5551           | 0.5624        | 0.5587    | 0.9561        | 0.9628     | 0.9594 | 0.8744         | 0.9002      | 0.8871  | 0.8448    | 0.8591 | 0.8519 | 0.9567   |
| 0.0019        | 8.0   | 13832 | 0.3412          | 0.5339           | 0.5433        | 0.5386    | 0.9565        | 0.9579     | 0.9572 | 0.8921         | 0.9009      | 0.8965  | 0.8468    | 0.8535 | 0.8502 | 0.9560   |
| 0.0013        | 9.0   | 15561 | 0.3270          | 0.5033           | 0.5551        | 0.5279    | 0.9533        | 0.9600     | 0.9566 | 0.8900         | 0.9062      | 0.8981  | 0.8335    | 0.8588 | 0.8459 | 0.9574   |
| 0.0009        | 10.0  | 17290 | 0.3257          | 0.5285           | 0.5580        | 0.5429    | 0.9513        | 0.9593     | 0.9552 | 0.8826         | 0.9039      | 0.8931  | 0.8381    | 0.8582 | 0.8480 | 0.9567   |
| 0.0007        | 11.0  | 19019 | 0.3303          | 0.5299           | 0.5595        | 0.5443    | 0.9586        | 0.9586     | 0.9586 | 0.8859         | 0.9047      | 0.8952  | 0.8423    | 0.8585 | 0.8503 | 0.9574   |
| 0.0           | 12.0  | 20748 | 0.3480          | 0.5338           | 0.5565        | 0.5449    | 0.9552        | 0.9593     | 0.9573 | 0.8862         | 0.9070      | 0.8964  | 0.8428    | 0.8591 | 0.8509 | 0.9569   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
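
A quick way to check a local environment against these pins (a convenience sketch, not part of the original card):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.38.2
print("PyTorch:", torch.__version__)              # expected 2.2.1+cu121
print("Datasets:", datasets.__version__)          # expected 2.18.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.15.2
```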