Model Overview

This is a RoBERTa-Large question answering model trained from roberta-large (https://huggingface.co./roberta-large) in two stages. It is first trained on synthetic adversarial data generated by a BART-Large question generator, using Wikipedia passages both from SQuAD and external to SQuAD, and then fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage.
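
As a quick-start illustration (not part of the original card), the model can be loaded with the transformers question-answering pipeline; the question and context below are made-up examples:

```python
from transformers import pipeline

# Load the fine-tuned model into a question-answering pipeline.
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa-ext")

# Made-up example; any (question, context) pair works.
result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on SQuAD and AdversarialQA.",
)
print(result["answer"], result["score"])
```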

Data

Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
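
Both datasets are available on the Hugging Face Hub; a minimal sketch for loading them with the datasets library follows. The identifiers "squad" and "adversarial_qa" are the standard Hub names; using the combined "adversarialQA" config (covering all three annotation rounds) is an assumption here.

```python
from datasets import load_dataset

# SQuAD v1.1 and AdversarialQA; the combined "adversarialQA" config is
# assumed, not stated in the card.
squad = load_dataset("squad")
adversarial_qa = load_dataset("adversarial_qa", "adversarialQA")

print(squad)
print(adversarial_qa)
```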

Training Process

Approximately one training epoch on the synthetic data, followed by two training epochs on the manually curated data.
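
As context for this schedule, here is a minimal sketch of the second stage (two epochs on SQuAD + AdversarialQA). The preprocessing is the standard extractive-QA span-alignment recipe, and all hyperparameters are illustrative assumptions rather than the authors' exact settings; the first, synthetic stage would follow the same pattern with the synthetic dataset and a single epoch.

```python
from datasets import concatenate_datasets, load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForQuestionAnswering.from_pretrained("roberta-large")

def preprocess(examples):
    # Tokenize (question, context) pairs and convert the character-level
    # answer spans into token-level start/end positions.
    enc = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,
        padding="max_length",
        return_offsets_mapping=True,
    )
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        start_tok = end_tok = 0  # fall back to position 0 if truncated away
        for idx, (s, e) in enumerate(offsets):
            if seq_ids[idx] != 1:  # only consider context tokens
                continue
            if s <= start_char < e:
                start_tok = idx
            if s < end_char <= e:
                end_tok = idx
        starts.append(start_tok)
        ends.append(end_tok)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")
    return enc

squad = load_dataset("squad", split="train")
aqa = load_dataset("adversarial_qa", "adversarialQA", split="train")
train = concatenate_datasets([
    squad.map(preprocess, batched=True, remove_columns=squad.column_names),
    aqa.map(preprocess, batched=True, remove_columns=aqa.column_names),
])

args = TrainingArguments(
    output_dir="roberta-large-synqa-ext-stage2",
    num_train_epochs=2,  # two epochs on the curated data, per the schedule
    per_device_train_batch_size=8,  # illustrative, not the authors' setting
    learning_rate=1e-5,  # illustrative, not the authors' setting
)
Trainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=default_data_collator,
).train()
```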

Additional Information

Please refer to Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation (https://arxiv.org/abs/2104.08678) for full details.
