crosloengual-bert-si-nli
CroSloEngual BERT model fine-tuned on the SI-NLI dataset for Slovene natural language inference.
The model was fine-tuned in a classic sequence pair classification setting on the official training/validation/test split for 10 epochs, using validation set accuracy for model selection. It was optimized with the AdamW optimizer (learning rate 2e-5) and cross-entropy loss, using a batch size of 82 (selected based on the available GPU memory) and a maximum sequence length of 107 (the 99th percentile of sequence lengths in the training set).
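Below is a minimal sketch of this fine-tuning setup using the Hugging Face Trainer, under stated assumptions: the SI-NLI Hub dataset ID, the premise/hypothesis/label column names, and the string label values are assumptions and may need adjusting to the actual data release.

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "EMBEDDIA/crosloengual-bert"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

# Assumed SI-NLI Hub ID, column names, and string labels; adjust to the actual release.
dataset = load_dataset("cjvt/si_nli")
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}

def preprocess(batch):
    enc = tokenizer(batch["premise"], batch["hypothesis"],
                    truncation=True, max_length=107)  # 99th percentile of training lengths
    enc["label"] = [label2id[lab] for lab in batch["label"]]
    return enc

encoded = dataset.map(preprocess, batched=True)

def compute_accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

args = TrainingArguments(
    output_dir="crosloengual-bert-si-nli",
    num_train_epochs=10,
    learning_rate=2e-5,               # AdamW is the Trainer's default optimizer
    per_device_train_batch_size=82,   # chosen to fit the available GPU memory
    per_device_eval_batch_size=82,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # model selection by validation accuracy
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_accuracy,
)
trainer.train()
```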
The model achieves the following metrics:
- best validation accuracy: 0.660
- test accuracy: 0.673
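A minimal usage sketch for sequence pair classification with this model follows; the Hub repository ID is assumed from this card's title and should be replaced with the full repository path if it differs.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hub ID assumed from this card's title; replace with the full repository path if different.
model_id = "crosloengual-bert-si-nli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "Mačka spi na kavču."   # "The cat is sleeping on the couch."
hypothesis = "Žival počiva."      # "An animal is resting."

# Encode the premise/hypothesis pair as in training (sequence pair classification).
inputs = tokenizer(premise, hypothesis, truncation=True, max_length=107, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label[pred_id])  # e.g. "entailment", "neutral", or "contradiction"
```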