BERT Mini Sentiment Analysis – Emotion & Text Classification Model

Model Details

BERT Mini Sentiment Analysis is a lightweight transformer model fine-tuned from Prajjwal's BERT Mini for emotion-based sentiment analysis. It classifies text into emotion labels such as happiness, sadness, and anger.

With 11.2M parameters, this model is fast, efficient, and optimized for real-time applications, making it perfect for low-resource environments like mobile and edge devices.
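Since the weights are stored as 32-bit floats (4 bytes per parameter), the model's footprint can be estimated with a quick back-of-the-envelope calculation:

```python
# Approximate size of the F32 checkpoint: parameters × bytes per parameter.
params = 11_200_000        # 11.2M parameters
bytes_per_param = 4        # float32
size_mb = params * bytes_per_param / 1e6
print(f"{size_mb:.1f} MB")  # 44.8 MB
```

Roughly 45 MB of weights, small enough to load comfortably on mobile and edge hardware.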

  • Developed by: Varnika S
  • Model Type: Transformer
  • Language: English (en)
  • License: MIT
  • Finetuned from: Prajjwal's BERT Mini

🚀 Key Applications

| Use Case | Description |
| --- | --- |
| Social Media Analysis | Analyze sentiment trends on Twitter, Reddit, and Instagram |
| Customer Feedback | Extract insights from product reviews, surveys, and support tickets |
| Mental Health AI | Detect emotional distress in online conversations |
| AI Chatbots & Virtual Assistants | Enable sentiment-aware chatbot responses |
| Market Research | Understand audience reactions to products and services |

Example Usage

Use the model easily with the Hugging Face Transformers library:

```python
from transformers import pipeline

# Load the fine-tuned BERT Mini sentiment analysis model
sentiment_analysis = pipeline("text-classification", model="Varnikasiva/sentiment-classification-bert-mini")

# Analyze sentiment
result = sentiment_analysis("I feel amazing today!")
print(result)  # e.g. [{'label': 'happy', 'score': 0.98}]
```
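The pipeline returns a list of label/score dictionaries (pass `top_k=None` to the pipeline call to get a score for every label). A minimal post-processing sketch; the helper name and the 0.5 confidence threshold below are assumptions, not part of the model:

```python
# Hypothetical helper: pick the top label from the pipeline's
# [{'label': ..., 'score': ...}] output, falling back to "neutral"
# when no label is confident enough (threshold is an assumption).
def top_emotion(predictions, threshold=0.5):
    best = max(predictions, key=lambda p: p["score"])
    return best["label"] if best["score"] >= threshold else "neutral"

print(top_emotion([{"label": "happy", "score": 0.92},
                   {"label": "sad", "score": 0.08}]))  # happy
```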

🚀 Try it here: Hugging Face Model Page


🔥 Model Performance

| Metric | Score |
| --- | --- |
| Accuracy | High |
| Inference Speed | ⚡ Ultra-fast |
| Model Size | 11.2M Parameters |
| Fine-Tuned On | Emotion-Labeled Dataset |

📌 How to Fine-Tune Further?

To fine-tune this model on your own dataset, use Hugging Face's Trainer API or PyTorch Lightning:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

# Pass the arguments to a Trainer along with your model and datasets:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```

This allows you to adapt the model to specific domains, such as finance, healthcare, or customer service.
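Before training on your own data, the string emotion labels need to be mapped to integer ids for the classification head. A minimal sketch; the label set below is an assumption and should be replaced with your dataset's actual labels:

```python
# Hypothetical emotion label set; substitute your dataset's labels.
labels = ["happy", "sad", "angry", "fear", "surprise", "neutral"]
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

# Example: encode one raw record for training.
record = {"text": "I feel amazing today!", "label": "happy"}
encoded = {"text": record["text"], "label": label2id[record["label"]]}
print(encoded)  # {'text': 'I feel amazing today!', 'label': 0}
```

The same `label2id`/`id2label` mappings can be passed to the model config so that inference returns readable label names instead of ids.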


💡 Frequently Asked Questions (FAQ)

Q1: What datasets were used for fine-tuning?

A: This model was fine-tuned on an emotion-labeled dataset, ensuring high accuracy for detecting happiness, sadness, anger, and more.

Q2: Can I use this model for real-time applications?

A: Yes! The model is optimized for low-latency and high-speed inference, making it perfect for chatbots, social media monitoring, and real-time sentiment analysis.

Q3: How can I fine-tune this model further?

A: You can fine-tune it on your own data using Hugging Face's Trainer API or PyTorch Lightning for better domain-specific performance.


🚀 Contribute & Give Feedback

Feel free to contribute to this project or provide feedback to help improve the model. If you encounter issues or have feature requests, please reach out! 🎯

Happy Coding! 🚀
