# Fine-tuned XLM-R Model for Bulgarian Sentiment Analysis

This is a fine-tuned XLM-R model for sentiment analysis in Bulgarian.
## Model Details

- Model Name: XLM-R Sentiment Analysis
- Language: Bulgarian
- Fine-tuning Dataset: DGurgurov/bulgarian_sa
## Training Details

- Epochs: 20
- Batch Size: 32 (train), 64 (eval)
- Optimizer: AdamW
- Learning Rate: 5e-5
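
The hyperparameters above map directly onto the Hugging Face `Trainer` API. A minimal configuration sketch is shown below; the exact training script is not published, so the `output_dir` and the use of `TrainingArguments` here are assumptions, and dataset loading and preprocessing are omitted.

```python
from transformers import TrainingArguments

# Sketch of the stated hyperparameters as TrainingArguments
# (illustrative only; output_dir is an assumed name).
args = TrainingArguments(
    output_dir="xlm-r_bulgarian_sentiment",
    num_train_epochs=20,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    learning_rate=5e-5,  # AdamW is the Trainer's default optimizer
)
```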
## Performance Metrics

- Accuracy: 0.94381
- Macro F1: 0.90809
- Micro F1: 0.94381
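
Note that accuracy and micro F1 coincide (0.94381): for single-label classification they are mathematically the same quantity, while macro F1 averages per-class F1 scores and so penalizes weak minority-class performance. A quick illustration with scikit-learn, using made-up labels rather than the evaluation data:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy imbalanced binary labels (illustrative only).
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)
micro = f1_score(y_true, y_pred, average="micro")
macro = f1_score(y_true, y_pred, average="macro")
# micro F1 equals accuracy for single-label tasks; macro is lower here
# because the minority class is predicted poorly.
assert micro == acc
```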
## Usage

To use this model, load it with the Hugging Face Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_bulgarian_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_bulgarian_sentiment")
```
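
After tokenizing an input and running the model, predictions come from the output logits. The snippet below shows that post-processing on stand-in logits so it runs without downloading the model; the id-to-label mapping (0 = negative, 1 = positive) is an assumption, since the card does not document it.

```python
import torch

# Stand-in for model(**inputs).logits with shape (batch, num_labels).
logits = torch.tensor([[-1.2, 2.3]])
probs = torch.softmax(logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()
# Assumed label mapping; verify against the model's config.id2label.
label = {0: "negative", 1: "positive"}[pred_id]
```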
## License

MIT