
xlm-roberta-large-finetuned-ner

The following are the results on the evaluation set:

  • eval_loss: 0.0929097980260849
  • eval_precision: 0.8704318936877077
  • eval_recall: 0.8833942118572633
  • eval_f1: 0.8768651513038628
  • eval_accuracy: 0.982701988941157

Model description

This is the large RoBERTa model, FacebookAI/xlm-roberta-large-finetuned-conll03-english. It was fine-tuned on the Kaggle platform [https://www.kaggle.com/settings]. To run training, a temporary directory had to be created on Kaggle to temporarily store the model, which takes up around 35 GB; a loading sketch is shown below.
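A minimal sketch of loading the base checkpoint into a scratch directory, assuming the Transformers auto classes; the checkpoint name comes from this card, while the `/kaggle/temp` path is a hypothetical placeholder for the temporary directory described above:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Base checkpoint named in this card; cache_dir points at a hypothetical
# Kaggle scratch directory used to hold the large download temporarily.
checkpoint = "FacebookAI/xlm-roberta-large-finetuned-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, cache_dir="/kaggle/temp")
model = AutoModelForTokenClassification.from_pretrained(checkpoint, cache_dir="/kaggle/temp")
```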

The following hyperparameters were used during training:

  • eval_strategy: "epoch"
  • save_strategy: "epoch"
  • learning_rate: 2e-5 (this value is being changed)
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • weight_decay: 0.1
  • max_grad_norm: 1.0
  • adam_epsilon: 1e-5
  • fp16: True
  • save_total_limit: 2
  • load_best_model_at_end: True
  • push_to_hub: True
  • metric_for_best_model: "f1"
  • seed: 42
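For reference, the list above maps one-to-one onto a Transformers `TrainingArguments` object. A minimal reconstruction; only the `output_dir` name is an assumption, everything else is copied from the list:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-finetuned-ner",  # assumed output/repo name
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.1,
    max_grad_norm=1.0,
    adam_epsilon=1e-5,
    fp16=True,
    save_total_limit=2,
    load_best_model_at_end=True,
    push_to_hub=True,
    metric_for_best_model="f1",
    seed=42,
)
```

With `load_best_model_at_end=True` and `metric_for_best_model="f1"`, the checkpoint with the best eval F1 across the 5 epochs is restored after training.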
| Metric | Value |
| --- | --- |
| eval_loss | 0.12918254733085632 |
| eval_precision | 0.8674463937621832 |
| eval_recall | 0.8752458555774094 |
| eval_f1 | 0.8713286713286713 |
| eval_accuracy | 0.9813980358174466 |
| eval_runtime | 3.6357 s |
| eval_samples_per_second | 417.526 |
| eval_steps_per_second | 26.13 |
| epoch | 5.0 |
| Label | Precision | Recall | F1 | Number |
| --- | --- | --- | --- | --- |
| LOC | 0.8867924528301887 | 0.8238007380073801 | 0.8541367766618843 | 1084 |
| MISC | 0.7349726775956285 | 0.7911764705882353 | 0.7620396600566574 | 340 |
| ORG | 0.8400272294077604 | 0.8814285714285715 | 0.8602300453119553 | 1400 |
| PER | 0.9599465954606141 | 0.9782312925170068 | 0.9690026954177898 | 735 |
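The per-label breakdown above (Number is the count of gold entities of each type) is the kind of report produced by the seqeval metric. A minimal sketch of a `compute_metrics` function that yields both the overall and the per-entity numbers; the `label_list` here is a hypothetical standard BIO tag set, not taken from this card:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
# Assumed BIO tag order; replace with the actual id-to-label mapping.
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special/padded positions (label id -100) before scoring.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    # `results` also holds per-entity dicts, e.g. results["LOC"]["f1"]
    # and results["LOC"]["number"], which is where the table above comes from.
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```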
Inference Examples
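A minimal local usage sketch with the Transformers pipeline API; the repo id comes from this card, and the example sentence is arbitrary:

```python
from transformers import pipeline

# Token-classification pipeline over this card's fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="KPOETA/BERTO-LOS-MUCHACHOS-1",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
# Each result is a dict with entity_group (e.g. ORG, LOC), score, word, start, end.
```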
