# xlm-roberta-base-finetuned-pquad-squad
This model is a version of xlm-roberta-base fine-tuned jointly on the PQuAD and SQuAD_v2 datasets.
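A minimal usage sketch with the `transformers` question-answering pipeline. The model id below is a placeholder; substitute this model's actual Hub path. The question and context are illustrative only.

```python
from transformers import pipeline

# Placeholder model id; replace with this model's actual Hub path.
qa = pipeline("question-answering", model="xlm-roberta-base-finetuned-pquad-squad")

# SQuAD_v2-style models can abstain; handle_impossible_answer=True
# lets the pipeline return an empty answer for unanswerable questions.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    handle_impossible_answer=True,
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```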
## Results
The model was trained for only 3,000 of 12,000 steps (one quarter of an epoch) due to computational constraints.
| Metric      | Overall | HasAns | NoAns |
|-------------|---------|--------|-------|
| Exact match | 66.58   | 64.51  | 69.68 |
| F1          | 73.79   | 76.53  | 69.68 |

Evaluated on 19,849 examples (11,923 answerable, 7,926 unanswerable). `best_exact` and `best_f1` equal the overall scores at a no-answer threshold of 0.0.
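For reference, a minimal sketch of how SQuAD_v2-style metrics such as these can be computed with the `evaluate` library; the id, prediction, and reference below are illustrative, not drawn from this evaluation.

```python
import evaluate

# Illustrative inputs only; not taken from the evaluation above.
squad_v2 = evaluate.load("squad_v2")

predictions = [
    {"id": "q1", "prediction_text": "Paris", "no_answer_probability": 0.0},
]
references = [
    {"id": "q1", "answers": {"text": ["Paris"], "answer_start": [44]}},
]

# Returns keys such as 'exact', 'f1', and 'total', plus the
# HasAns_*/NoAns_* splits when both question types are present.
print(squad_v2.compute(predictions=predictions, references=references))
```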
## Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
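A sketch of how these settings map onto `transformers` `TrainingArguments`; `output_dir` is a placeholder, and `max_steps=3000` reflects the early stop noted above rather than a logged hyperparameter.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-pquad-squad",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    max_steps=3000,  # assumption: encodes the 3000/12000-step early stop
)
```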
## Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2