
Erlangshen-Roberta-110M-Similarity is a Chinese semantic-similarity model, one of the models in Fengshenbang-LM.

We collected 20 Chinese paraphrase datasets for fine-tuning, with a total of 2,773,880 samples. Our model is mainly based on RoBERTa.
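As a rough illustration of that setup, the sketch below fine-tunes a sequence-pair classifier with the Hugging Face Trainer. The base checkpoint, toy data, and hyperparameters here are assumptions made for the sketch, not the exact recipe used to train this model.

from transformers import (BertForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)
import torch

# Assumed base checkpoint; the actual pre-trained weights behind
# Erlangshen-Roberta-110M may differ.
base = 'hfl/chinese-roberta-wwm-ext'
tokenizer = BertTokenizer.from_pretrained(base)
model = BertForSequenceClassification.from_pretrained(base, num_labels=2)

# Toy paraphrase pairs standing in for the collected corpora:
# (sentence_a, sentence_b, label) with 1 = paraphrase, 0 = not.
pairs = [
    ('今天的饭不好吃', '今天心情不好', 0),      # unrelated sentences
    ('这部电影很好看', '这部影片非常精彩', 1),  # paraphrases
]

class PairDataset(torch.utils.data.Dataset):
    def __init__(self, pairs):
        self.pairs = pairs
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, idx):
        a, b, label = self.pairs[idx]
        enc = tokenizer(a, b, truncation=True, padding='max_length',
                        max_length=64, return_tensors='pt')
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item['labels'] = torch.tensor(label)
        return item

# Illustrative hyperparameters only.
args = TrainingArguments(output_dir='out', num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=PairDataset(pairs)).train()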

Usage

from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch

# Load the fine-tuned similarity checkpoint and its tokenizer.
tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Similarity')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Similarity')

texta = '今天的饭不好吃'  # "Today's food doesn't taste good."
textb = '今天心情不好'    # "I'm in a bad mood today."

# Encode the sentence pair and print the class probabilities.
with torch.no_grad():
    output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
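
Assuming index 1 is the positive (similar) class, the second probability can be read as a similarity score. The helper below scores several pairs at once, reusing the tokenizer and model loaded above; its name and batching details are an illustrative sketch, not part of the released API.

# Hypothetical batch-scoring helper; reuses `tokenizer` and `model`
# from the example above.
def similarity_scores(pairs):
    enc = tokenizer([a for a, b in pairs], [b for a, b in pairs],
                    padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        logits = model(**enc).logits
    # Index 1 is assumed to be the "similar" class.
    return torch.nn.functional.softmax(logits, dim=-1)[:, 1].tolist()

print(similarity_scores([('今天的饭不好吃', '今天心情不好')]))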

Scores on downstream Chinese tasks (note: the dev sets of BUSTM and AFQMC may overlap with the training set)

| Model | BQ | BUSTM | AFQMC |
| :--- | :--- | :--- | :--- |
| Erlangshen-Roberta-110M-Similarity | 85.41 | 95.18 | 81.72 |
| Erlangshen-Roberta-330M-Similarity | 86.21 | 99.29 | 93.89 |
| Erlangshen-MegatronBert-1.3B-Similarity | 86.31 | - | - |
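
For reference, accuracies like those above can be computed with a plain evaluation loop over labeled dev pairs. In the minimal sketch below, `dev_pairs` is a placeholder for (sentence_a, sentence_b, label) triples from BQ, BUSTM, or AFQMC, which must be obtained separately; this is not an official evaluation script.

# Sketch of dev-set accuracy; reuses `tokenizer` and `model` from
# the Usage section. `dev_pairs` is a placeholder for benchmark data.
def accuracy(dev_pairs):
    correct = 0
    for a, b, label in dev_pairs:
        enc = tokenizer(a, b, return_tensors='pt')
        with torch.no_grad():
            pred = model(**enc).logits.argmax(dim=-1).item()
        correct += int(pred == label)
    return correct / len(dev_pairs)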

Citation

If you find this resource useful, please cite the following repository in your paper.

@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}