Model Overview
This is a RoBERTa-Large QA model, fine-tuned from https://huggingface.co/roberta-large in two stages. It is first trained on synthetic adversarial data generated by a BART-Large question generator over Wikipedia passages from SQuAD, and then fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage.
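As a quick illustration, the model can be loaded through the standard transformers question-answering pipeline. This is a minimal sketch rather than an official snippet from the authors; the question and context strings below are invented for the example.

```python
from transformers import pipeline

# Load the extractive QA model from the Hub (model id taken from this card)
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa")

# Invented question/context pair, purely for illustration
result = qa(
    question="What architecture is the model based on?",
    context="The model is a RoBERTa-Large reader fine-tuned for extractive question answering.",
)
print(result)  # dict with 'score', 'start', 'end', and 'answer'
```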
Data
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
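Both corpora are available on the Hugging Face Hub. As a sketch only (the card does not name the Hub dataset ids; `squad` and `adversarial_qa` are assumptions), the stage-2 training mix could be assembled with the datasets library:

```python
from datasets import load_dataset, concatenate_datasets

# Hub dataset names are assumed, not stated in this card
squad = load_dataset("squad", split="train")
adv_qa = load_dataset("adversarial_qa", "adversarialQA", split="train")

# adversarial_qa carries extra columns (e.g. metadata); drop any column
# SQuAD lacks so the two schemas line up before concatenation
extra = [c for c in adv_qa.column_names if c not in squad.column_names]
combined = concatenate_datasets([squad, adv_qa.remove_columns(extra)])
```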
Training Process
Approximately one training epoch on the synthetic data, followed by two training epochs on the manually curated data (SQuAD + AdversarialQA).
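The full hyperparameters are given in the paper linked below. Purely as an illustrative sketch, the two-stage schedule could be expressed with transformers TrainingArguments; every value other than the epoch counts is a placeholder, not taken from this card:

```python
from transformers import TrainingArguments

# Stage 1: ~1 epoch on the synthetic adversarial data
stage1_args = TrainingArguments(
    output_dir="stage1-synthetic",   # placeholder path
    num_train_epochs=1,
    learning_rate=3e-5,              # placeholder, not from the card
    per_device_train_batch_size=16,  # placeholder, not from the card
)

# Stage 2: 2 epochs on SQuAD + AdversarialQA
stage2_args = TrainingArguments(
    output_dir="stage2-curated",     # placeholder path
    num_train_epochs=2,
    learning_rate=3e-5,              # placeholder, not from the card
    per_device_train_batch_size=16,  # placeholder, not from the card
)
```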
Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details.
Evaluation results
- Exact Match on the SQuAD validation set (self-reported): 89.653
- F1 on the SQuAD validation set (self-reported): 94.817
- Exact Match on the AdversarialQA validation set (self-reported): 55.333
- F1 on the AdversarialQA validation set (self-reported): 66.746
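Exact Match and F1 here are the standard SQuAD metrics. For reference, they can be computed with the evaluate library; the prediction and reference below are invented for the example:

```python
import evaluate

# SQuAD-style metric: returns exact_match and f1, as reported above
squad_metric = evaluate.load("squad")

predictions = [{"id": "example-1", "prediction_text": "Denver Broncos"}]
references = [{"id": "example-1",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```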