# xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA Model Card
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")
```
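Once the pipeline is created, it takes a question and a context and returns the extracted answer span. A minimal sketch, assuming a hypothetical Hindi question/context pair (the strings below are illustrative, not from the model card):

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")

# Illustrative Hindi example: "Where is the Taj Mahal located?" /
# "The Taj Mahal is located in the city of Agra, India."
question = "ताजमहल कहाँ स्थित है?"
context = "ताजमहल भारत के आगरा शहर में स्थित है।"

result = pipe(question=question, context=context)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.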
Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")
model = AutoModelForQuestionAnswering.from_pretrained("hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")
```
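When loading the model directly, you decode the answer span yourself from the model's start/end logits. A minimal sketch, assuming an illustrative Hindi question/context pair (not from the model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# Illustrative example: "Where is the Taj Mahal located?" /
# "The Taj Mahal is located in the city of Agra, India."
question = "ताजमहल कहाँ स्थित है?"
context = "ताजमहल भारत के आगरा शहर में स्थित है।"

# Encode question and context as a single sequence pair
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token positions and decode that span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end], skip_special_tokens=True)
print(answer)
```

This greedy argmax decoding is the simplest strategy; the `question-answering` pipeline additionally filters out spans where the end precedes the start and spans that fall inside the question.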