# Model Card for distilbert-snli
## Model Details

### Model Description
A fine-tuned version of distilbert/distilbert-base-uncased, trained on the stanford-nlp/snli dataset.
- Developed by: Karl Weinmeister
- Language(s) (NLP): en
- License: apache-2.0
- Finetuned from model: distilbert/distilbert-base-uncased
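The model can be loaded for inference with the standard `transformers` sequence-classification API. A minimal sketch follows; it assumes the published checkpoint id `kweinmeister/distilbert-snli` and that the three SNLI labels (entailment, neutral, contradiction) are recorded in the model config's `id2label` map, so we read the label name from there rather than hard-coding an order.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint (assumed repo id from this card's page).
model_id = "kweinmeister/distilbert-snli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI input is a (premise, hypothesis) pair encoded as one sequence pair.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# Look the class name up in the config instead of assuming a label order.
print(model.config.id2label[pred])
```

The pair is passed as two arguments to the tokenizer so it is encoded as a single `[CLS] premise [SEP] hypothesis [SEP]` sequence, which is how the model was fine-tuned.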
### Training Hyperparameters
- Training regime: The model was trained for 5 epochs with a batch size of 128.