# MobileBERT fine-tuned on the SQuAD v2 dataset

This model is based on the MobileBERT architecture, which makes it suitable for mobile and other resource-constrained devices.

## Usage

Using the `transformers` library, first load the model and tokenizer:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "aware-ai/mobilebert-squadv2"

model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Then create a question-answering pipeline and run it on your input:

```python
qa_engine = pipeline('question-answering', model=model, tokenizer=tokenizer)
QA_input = {
    'question': 'your question?',
    'context': 'your context',
}
res = qa_engine(QA_input)
```
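The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start` and `end` are character offsets of the answer span within the context. A minimal end-to-end sketch, assuming the model can be downloaded from the Hub (the example question and context are illustrative, not from the model card):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "aware-ai/mobilebert-squadv2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
qa_engine = pipeline("question-answering", model=model, tokenizer=tokenizer)

context = (
    "MobileBERT is a compact BERT variant designed for resource-limited "
    "devices. This checkpoint was fine-tuned on SQuAD v2, which extends "
    "SQuAD v1.1 with unanswerable questions."
)
result = qa_engine(
    question="What dataset was the model fine-tuned on?",
    context=context,
)

print(result["answer"])                        # extracted answer text
print(result["score"])                         # confidence between 0 and 1
print(context[result["start"]:result["end"]])  # the same span, sliced from context
```

Because the model was trained on SQuAD v2, it can also signal that a question is unanswerable from the given context; a low `score` is a hint to treat the answer as unreliable.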