poudel committed on
Commit
64fe88e
1 Parent(s): dd3e782

Update README.md

Files changed (1):
  1. README.md +53 -151
README.md CHANGED
@@ -1,177 +1,79 @@
  ---
  library_name: transformers
  license: apache-2.0
- datasets:
- - Private
  language:
  - en
- metrics:
- - accuracy
- - precision
- - recall
- - f1
  base_model: google-bert/bert-base-uncased
- pipeline_tag: text-classification
  ---
 
- # Model Card for Model ID
 
- This is a fine-tuned BERT model (`bert-base-uncased`) for classifying text into two categories: **Depression** or **Non-depression**. The model is designed for text classification and was trained on a custom dataset of mental health-related posts from social media.
 
- ### Model Description
 
- This model aims to identify signs of depression in written text. It was trained on social media posts labeled as either indicative of depression or not. The model uses the BERT architecture for text classification and was fine-tuned specifically for this task.
-
- This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
-
- - **Developed by:** Ashish Poudel
- - **Model type:** Text Classification
- - **Language(s) (NLP):** English (`en`)
- - **License:** `apache-2.0`
- - **Finetuned from model:** `google-bert/bert-base-uncased`
-
- ### Model Sources [optional]
-
- - **Repository:** [Sentiment Classifier for Depression](https://huggingface.co/poudel/sentiment-classifier)
- - **Demo [optional]:** [Live Gradio App](https://huggingface.co/spaces/poudel/Sentiment_classifier)
-
- ### Use
-
- This model is designed to classify text as either depression-related or non-depression-related. It can be used in social media sentiment analysis, mental health research, and automated text analysis systems.
-
- ### Downstream Use
-
- The model can be further fine-tuned for other sentiment analysis tasks related to mental health.
-
- ### Out-of-Scope Use
-
- The model should not be used for clinical diagnosis or decision-making without the input of medical professionals. It is also unsuitable for text that is not in English or for very short or ambiguous inputs.
-
- ## Bias, Risks, and Limitations
-
- The model may suffer from biases inherent in the dataset, such as overrepresenting certain language patterns. It is trained on social media posts, which may not capture all the nuances of real-world conversations about mental health.
-
- ### Recommendations
-
- Users should apply the model with caution in sensitive applications such as mental health monitoring. It is advised that the model be used alongside professional judgment.
-
- ## How to Get Started with the Model
-
- ```python
- import torch
- from transformers import AutoModelForSequenceClassification, AutoTokenizer
-
- model = AutoModelForSequenceClassification.from_pretrained("poudel/sentiment-classifier")
- tokenizer = AutoTokenizer.from_pretrained("poudel/sentiment-classifier")
-
- inputs = tokenizer("I feel hopeless.", return_tensors="pt")
- outputs = model(**inputs)
- predicted_class = torch.argmax(outputs.logits).item()
- ```
-
- ## Training Details
-
- ### Training Data
-
- The model was trained on a custom dataset of tweets labeled as either depression-related or not. Data pre-processing included tokenization and removal of special characters.
-
- ### Training Procedure
-
- The model was trained using Hugging Face's `transformers` library. Training was conducted on a T4 GPU over 3 epochs, with a batch size of 16 and a learning rate of 5e-5.
-
- #### Preprocessing
-
- Text was lowercased and special characters were removed; tokenization used the `bert-base-uncased` tokenizer.
-
- #### Training Hyperparameters
-
- - **Training regime:** fp32
- - **Epochs:** 3
- - **Learning rate:** 5e-5
- - **Batch size:** 16
-
- #### Speeds, Sizes, Times
-
- Training took approximately 1 hour on a T4 GPU in Google Colab.
-
- #### Evaluation and Testing Data
-
- The model was evaluated on a 20% holdout set from the custom dataset.
-
- #### Metrics
-
- The model was evaluated using accuracy, precision, recall, and F1 score.
-
- ### Results
-
- - Accuracy: 99.87%
- - Precision: 99.91%
- - Recall: 99.81%
- - F1 Score: 99.86%
-
- #### Summary
-
- The model achieved high performance across all key metrics, indicating strong predictive capability for this text classification task.
 
  ## Environmental Impact
 
- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
-
- - **Hardware Type:** T4 GPU
- - **Hours used:** 1 hour
- - **Cloud Provider:** Google Cloud (Colab)
- - **Carbon Emitted:** Estimated at 0.45 kg CO2eq
-
- ## Technical Specifications
-
- The model uses the BERT (`bert-base-uncased`) architecture and was fine-tuned for binary classification (depression vs. non-depression).
-
- ### Model Architecture and Objective
-
- #### Hardware
-
- T4 GPU
-
- #### Software
-
- Hugging Face `transformers` library.
152
- ## Citation [optional]
 
 
153
 
154
- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.
155
 
156
  **BibTeX:**
157
-
158
- [@misc{poudel2024sentimentclassifier,
159
  author = {Poudel, Ashish},
160
  title = {Sentiment Classifier for Depression},
161
  year = {2024},
162
  url = {https://huggingface.co/poudel/sentiment-classifier},
163
- }
164
- ]
165
-
166
- **APA:**
167
-
168
- Poudel, A. (2024). Sentiment Classifier for Depression. Retrieved from https://huggingface.co/poudel/sentiment-classifier.
169
-
170
-
171
- ## Model Card Authors
172
-
173
- [Ashish Poudel]
174
-
175
- ## Model Card Contact
176
-
177
 
  ---
  library_name: transformers
+ tags:
+ - text-classification
+ - sentiment-analysis
+ - depression
+ - BERT
+ - mental-health
+ model-index:
+ - name: Sentiment Classifier for Depression
+   results:
+   - task:
+       type: text-classification
+     dataset:
+       name: Custom Depression Tweets Dataset
+       type: custom
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 99.87
+     - name: Precision
+       type: precision
+       value: 99.91
+     - name: Recall
+       type: recall
+       value: 99.81
+     - name: F1 Score
+       type: f1
+       value: 99.86
  license: apache-2.0
  language:
  - en
  base_model: google-bert/bert-base-uncased
  ---
 
+ # Model Card for Sentiment Classifier for Depression
 
+ This model is a fine-tuned version of BERT (`bert-base-uncased`) for classifying text as either **Depression** or **Non-depression**. It was trained on a custom dataset of mental health-related social media posts and has shown high accuracy in sentiment classification.
 
+ ## Training Data
+ The model was trained on a custom dataset of tweets labeled as either depression-related or not. Data pre-processing included tokenization and removal of special characters.
 
+ ## Training Procedure
+ The model was trained using Hugging Face's `transformers` library. Training was conducted on a T4 GPU over 3 epochs, with a batch size of 16 and a learning rate of 5e-5.
 
+ ## Evaluation and Testing Data
+ The model was evaluated on a 20% holdout set from the custom dataset.
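The 20% holdout above can be sketched as a simple shuffled split (a stdlib illustration only; the exact splitting procedure used for this model is not specified in the card):

```python
import random

def holdout_split(samples, test_fraction=0.2, seed=42):
    """Shuffle and split a labeled dataset into train/test partitions."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    # The first n_test shuffled samples form the holdout set.
    return shuffled[n_test:], shuffled[:n_test]

# Example: 100 labeled posts -> 80 train / 20 test.
data = [(f"post {i}", i % 2) for i in range(100)]
train, test = holdout_split(data)
print(len(train), len(test))  # 80 20
```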
 
+ ## Results
 
+ - **Accuracy:** 99.87%
+ - **Precision:** 99.91%
+ - **Recall:** 99.81%
+ - **F1 Score:** 99.86%
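The four reported metrics can be recomputed from raw predictions. A minimal stdlib sketch (treating the depression class as the positive label, which is an assumption of this example):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall, and F1 for a binary classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

In practice, `sklearn.metrics` (`accuracy_score`, `precision_recall_fscore_support`) provides equivalent, battle-tested implementations.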
 
  ## Environmental Impact
+ The carbon emissions from training this model can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
+ - **Hardware Type:** T4 GPU
+ - **Hours used:** 1 hour
+ - **Cloud Provider:** Google Cloud (Colab)
+ - **Carbon Emitted:** Estimated at 0.45 kg CO2eq
 
+ ## Technical Specifications
 
+ - **Architecture:** BERT (`bert-base-uncased`)
+ - **Training Hardware:** T4 GPU in Colab
+ - **Training Library:** Hugging Face `transformers`
 
+ ## Citation
 
  **BibTeX:**
+ ```bibtex
+ @misc{poudel2024sentimentclassifier,
    author = {Poudel, Ashish},
    title = {Sentiment Classifier for Depression},
    year = {2024},
    url = {https://huggingface.co/poudel/sentiment-classifier},
+ }
+ ```