---
library_name: transformers
license: apache-2.0
datasets:
- Private
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
base_model: google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# Model Card for Sentiment Classifier for Depression

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. It is a BERT-based binary text classifier fine-tuned to distinguish depression-related text from non-depression text.

- **Developed by:** Ashish Poudel
- **Model type:** Text Classification
- **Language(s) (NLP):** English (`en`)
- **License:** `apache-2.0`
- **Finetuned from model:** `google-bert/bert-base-uncased`

### Model Sources

- **Repository:** [Sentiment Classifier for Depression](https://huggingface.co./poudel/sentiment-classifier)
- **Demo:** [Live Gradio App](https://huggingface.co./spaces/poudel/Sentiment_classifier)

## Uses

### Direct Use

Classifying English text as depression vs. non-depression.

### Downstream Use

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

## How to Get Started with the Model

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("poudel/sentiment-classifier")
tokenizer = AutoTokenizer.from_pretrained("poudel/sentiment-classifier")

inputs = tokenizer("I feel hopeless.", return_tensors="pt")
outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits).item()
```

## Training Details

### Training Data

### Training Procedure

#### Preprocessing

#### Training Hyperparameters

- **Training regime:**
- **Epochs:**
- **Learning rate:**
- **Batch size:**

#### Speeds, Sizes, Times

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Metrics

### Results

- Accuracy: 99.87%
- Precision: 99.91%
- Recall: 99.81%
- F1 Score: 99.86%

#### Summary

The model achieved high performance across all key metrics, indicating strong predictive capabilities for the text classification task.
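The metrics reported above follow their standard definitions for binary classification. As a minimal sketch of how they are computed from a confusion matrix (the labels and predictions below are illustrative toy data, not the model's actual test set; `1` = depression, `0` = non-depression is an assumed label order):

```python
# Toy labels and predictions; illustrative only, not actual evaluation data.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

# Confusion-matrix counts for the positive class (depression = 1).
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
```

In practice these values are typically computed with a library such as scikit-learn rather than by hand; the sketch just makes the definitions behind the reported numbers explicit.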
## Environmental Impact

- **Hardware Type:** T4 GPU
- **Hours used:** 1 hour
- **Cloud Provider:** Google Cloud (Colab)
- **Carbon Emitted:** Estimated at 0.45 kg CO2eq

## Technical Specifications

The model uses the BERT (`bert-base-uncased`) architecture and was fine-tuned for binary classification (depression vs. non-depression).

### Model Architecture and Objective

#### Hardware

T4 GPU

#### Software

Hugging Face `transformers` library.

## Citation

**BibTeX:**

```bibtex
@misc{poudel2024sentimentclassifier,
  author = {Poudel, Ashish},
  title  = {Sentiment Classifier for Depression},
  year   = {2024},
  url    = {https://huggingface.co./poudel/sentiment-classifier}
}
```

**APA:**

Poudel, A. (2024). *Sentiment Classifier for Depression*. Retrieved from https://huggingface.co./poudel/sentiment-classifier

## Model Card Authors

Ashish Poudel

## Model Card Contact

ashishpoudel112@gmail.com