
Fine-tuned XLM-R Model for Hebrew Sentiment Analysis

This is a fine-tuned XLM-R model for sentiment analysis in Hebrew.

Model Details

  • Model Name: XLM-R Sentiment Analysis
  • Language: Hebrew
  • Fine-tuning Dataset: DGurgurov/hebrew_sa

Training Details

  • Epochs: 20
  • Batch Size: 32 (train), 64 (eval)
  • Optimizer: AdamW
  • Learning Rate: 5e-5
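The hyperparameters above map onto a Hugging Face `Trainer` configuration along these lines (a minimal sketch; the output directory is an illustrative assumption, not taken from the actual training run):

```python
from transformers import TrainingArguments

# Values below are the hyperparameters listed in this card;
# output_dir is an assumption for illustration only.
training_args = TrainingArguments(
    output_dir="xlm-r-hebrew-sentiment",  # assumed name, not from the card
    num_train_epochs=20,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    learning_rate=5e-5,  # AdamW is the default optimizer in Trainer
)
```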

Performance Metrics

  • Accuracy: 0.92106
  • Macro F1: 0.90782
  • Micro F1: 0.92106

Usage

To use this model, load it with the Hugging Face Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
```
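Once loaded, the model can be applied to raw text. A minimal inference sketch follows; the example sentence and helper function are illustrative, and the card does not document the class-index-to-sentiment mapping, so the returned indices are assumptions to be checked against the model config:

```python
import torch

def classify(texts, tokenizer, model):
    """Return the predicted class index for each input text."""
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():  # no gradients needed at inference time
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).tolist()

if __name__ == "__main__":
    # Loading as shown above; requires the transformers library.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
    model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
    # Hypothetical example sentence ("a great movie").
    print(classify(["סרט נהדר"], tokenizer, model))
```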

License

MIT

Model Size

  • Parameters: 278M
  • Tensor type: F32