# bert-base-uncased finetuned on MNLI

## Model Details and Training Data

We took the pretrained bert-base-uncased model and fine-tuned it on the MultiNLI (MNLI) dataset.

The training hyperparameters were kept the same as in Devlin et al., 2019: learning rate = 2e-5, training epochs = 3, max_sequence_len = 128, and batch_size = 32.
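
As a rough sketch of the setup (not the original training script), fine-tuning with these hyperparameters can be reproduced with the Hugging Face `transformers` and `datasets` libraries; the `TrainingArguments` mirror the values above, while the output directory and padding strategy are illustrative choices:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# MNLI is a three-way classification task:
# entailment, neutral, contradiction.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

def tokenize(batch):
    # Encode premise/hypothesis pairs, truncated and padded
    # to the 128-token limit stated above.
    return tokenizer(
        batch["premise"],
        batch["hypothesis"],
        truncation=True,
        padding="max_length",
        max_length=128,
    )

# The "mnli" config of GLUE provides train, validation_matched,
# and validation_mismatched splits.
mnli = load_dataset("glue", "mnli").map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-base-uncased-mnli",  # illustrative path
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=mnli["train"],
    eval_dataset=mnli["validation_matched"],
)
trainer.train()
```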

## Evaluation Results

The evaluation results are reported in the table below.

| Test Corpus | Accuracy |
|-------------|----------|
| Matched     | 0.8456   |
| Mismatched  | 0.8484   |
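
For reference, here is a minimal inference sketch with the published checkpoint. It assumes the hub id `ishan/bert-base-uncased-mnli` and the standard GLUE/MNLI label order; check the model's `config.id2label` to confirm the mapping.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ishan/bert-base-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained(
    "ishan/bert-base-uncased-mnli"
)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order assumed to follow the GLUE/MNLI convention
# (0 = entailment, 1 = neutral, 2 = contradiction).
pred = logits.argmax(dim=-1).item()
print(pred)
```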
