
A BART-base question generation model fine-tuned on SQuAD and Spoken-SQuAD.

Example usage:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "alinet/bart-base-spoken-squad-qg"

tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def run_model(input_string, **generator_args):
    # Tokenize the passage, generate a question, and decode it back to text
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)

run_model(
    "Stanford Question Answering Dataset (SQuAD) is a reading comprehension "
    "dataset, consisting of questions posed by crowdworkers on a set of "
    "Wikipedia articles, where the answer to every question is a segment of "
    "text, or span, from the corresponding reading passage, or the question "
    "might be unanswerable.",
    max_length=32,
    num_beams=4,
)
# ['What is a reading comprehension dataset?']
```
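For passages longer than the model's input limit (BART-base accepts up to 1024 subword tokens), the text can be split into overlapping chunks and each chunk passed to `run_model` separately. A minimal word-based sketch; `chunk_text`, `max_words`, and `overlap` are illustrative assumptions, not part of the model's API:

```python
def chunk_text(text, max_words=200, overlap=20):
    """Split text into overlapping word-based chunks.

    Word counts only approximate BART's subword token limit,
    so max_words is kept conservative. Overlap preserves some
    context across chunk boundaries.
    """
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap
    return chunks
```

Each returned chunk can then be fed to `run_model` in turn to generate one question per chunk.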
