# Zerpal-mBERT

## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='udmurtNLP/zerpal-mbert', tokenizer='udmurtNLP/zerpal-mbert-tokenizer')
unmasker("Ӟечбур! Мынам нимы [MASK].")
```
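The `fill-mask` pipeline returns a list of candidate fillers, each a dict with `score`, `token`, `token_str`, and `sequence` keys. A minimal sketch of post-processing such a result list (the scores and tokens below are illustrative placeholders, not actual zerpal-mbert output):

```python
# Illustrative fill-mask output structure; the scores and token strings are
# made-up placeholders, not real predictions from the model.
sample_results = [
    {"score": 0.42, "token": 1234, "token_str": "Иван", "sequence": "Ӟечбур! Мынам нимы Иван."},
    {"score": 0.17, "token": 5678, "token_str": "Оля", "sequence": "Ӟечбур! Мынам нимы Оля."},
]

def top_prediction(results):
    """Return the highest-scoring candidate from a fill-mask result list."""
    return max(results, key=lambda r: r["score"])

best = top_prediction(sample_results)
print(best["token_str"], best["score"])
```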

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('udmurtNLP/zerpal-mbert-tokenizer')
model = BertModel.from_pretrained('udmurtNLP/zerpal-mbert')

text = "Яратон, яратон, мар меда сыӵе тон?"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
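Here `output.last_hidden_state` has shape `(batch, seq_len, hidden)`; a common way to collapse it into a single sentence vector is mean pooling over the non-padding tokens, weighted by the attention mask. A minimal sketch of that pooling step in plain Python, using small dummy vectors in place of real model activations:

```python
def mean_pool(hidden_states, attention_mask):
    """Average token vectors, skipping positions where attention_mask is 0.

    hidden_states: list of token vectors (list[list[float]])
    attention_mask: list of 0/1 ints, same length as hidden_states
    """
    dim = len(hidden_states[0])
    totals = [0.0] * dim
    count = 0
    for vec, mask in zip(hidden_states, attention_mask):
        if mask:
            count += 1
            for i, v in enumerate(vec):
                totals[i] += v
    return [t / count for t in totals]

# Dummy stand-ins for output.last_hidden_state[0] and
# encoded_input["attention_mask"][0] — not real model numbers.
tokens = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [9.0, 9.0, 9.0]]
mask = [1, 1, 0]  # last position is padding, so it is excluded
print(mean_pool(tokens, mask))  # → [2.0, 3.0, 4.0]
```

In real use you would pass `output.last_hidden_state[0].tolist()` and `encoded_input["attention_mask"][0].tolist()`; with tensors, the same idea is usually expressed directly in PyTorch for efficiency.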
Model size: 187M parameters (F32, Safetensors)
