---
library_name: transformers
license: apache-2.0
datasets:
  - Private
language:
  - en
metrics:
  - accuracy
  - precision
  - recall
  - f1
base_model: google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# Model Card for poudel/sentiment-classifier

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Ashish Poudel
- **Model type:** Text Classification
- **Language(s) (NLP):** English (en)
- **License:** apache-2.0
- **Finetuned from model:** google-bert/bert-base-uncased

## Model Sources

## Uses

### Direct Use

### Downstream Use

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

## How to Get Started with the Model

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("poudel/sentiment-classifier")
tokenizer = AutoTokenizer.from_pretrained("poudel/sentiment-classifier")

inputs = tokenizer("I feel hopeless.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits, dim=-1).item()
```
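To report a confidence score alongside the predicted class, the logits can be passed through a softmax. A minimal sketch with made-up logit values (the model's actual label order is not documented in this card; index 1 = depression-related is an assumption):

```python
import math

# Hypothetical logits for [not_depression, depression]; in practice use
# outputs.logits from the model above, and check model.config.id2label
# for the real label names.
logits = [-1.2, 2.3]
total = sum(math.exp(x) for x in logits)
probs = [math.exp(x) / total for x in logits]
predicted = max(range(len(probs)), key=probs.__getitem__)
print(predicted, round(probs[predicted], 3))
```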


## Training Details

### Training Data

The model was trained on a custom dataset of tweets labeled as either depression-related or not. Data pre-processing included tokenization and removal of special characters.


### Training Procedure

The model was trained using Hugging Face's `transformers` library. Training was conducted on a T4 GPU over 3 epochs, with a batch size of 16 and a learning rate of 5e-5.

#### Preprocessing

Text was lowercased and special characters were removed; tokenization was done using the `bert-base-uncased` tokenizer.
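A minimal sketch of this cleaning step, assuming a simple keep-alphanumerics rule (the original preprocessing script is not published, so the exact character set retained is an assumption):

```python
import re

def preprocess(text: str) -> str:
    """Lowercase and strip special characters, as described above."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", "", text)  # drop punctuation/emoji/symbols
    return re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace

print(preprocess("I feel HOPELESS!!! :("))  # -> "i feel hopeless"
```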


#### Training Hyperparameters

- **Training regime:** fp32
- **Epochs:** 3
- **Learning rate:** 5e-5
- **Batch size:** 16
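These hyperparameters map directly onto Hugging Face `TrainingArguments`; a sketch assuming the standard `Trainer` workflow (the output directory name is an arbitrary choice, and dataset loading is omitted):

```python
from transformers import Trainer, TrainingArguments

# Hyperparameters as reported in this card.
args = TrainingArguments(
    output_dir="bert-depression-classifier",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    fp16=False,  # fp32 training regime
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```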

#### Speeds, Sizes, Times

Training took approximately 1 hour on a T4 GPU in Google Colab.


## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on a 20% holdout set from the custom dataset.


#### Metrics

The model was evaluated using accuracy, precision, recall, and F1 score.


### Results

- **Accuracy:** 99.87%
- **Precision:** 99.91%
- **Recall:** 99.81%
- **F1 Score:** 99.86%
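For reference, these four metrics can be reproduced with scikit-learn; a sketch on a toy binary task (1 = depression-related), not the actual holdout set:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy labels and predictions, just to show how the reported metrics are computed.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]

print(accuracy_score(y_true, y_pred))   # 0.875
print(precision_score(y_true, y_pred))  # 1.0  (no false positives)
print(recall_score(y_true, y_pred))     # 0.75 (one positive missed)
print(f1_score(y_true, y_pred))         # ~0.857
```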

#### Summary
The model achieved high performance across all key metrics, indicating strong predictive capabilities for the text classification task.



## Environmental Impact

Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- **Hardware Type:** T4 GPU
- **Hours used:** ~1 hour
- **Cloud Provider:** Google Cloud (Colab)
- **Carbon Emitted:** Estimated at 0.45 kg CO2eq

## Technical Specifications

### Model Architecture and Objective

The model uses the BERT (`bert-base-uncased`) architecture and was fine-tuned for binary classification (depression vs. non-depression).

#### Hardware

T4 GPU

#### Software

Hugging Face `transformers` library.

## Citation


**BibTeX:**

```bibtex
@misc{poudel2024sentimentclassifier,
  author = {Poudel, Ashish},
  title  = {Sentiment Classifier for Depression},
  year   = {2024},
  url    = {https://huggingface.co./poudel/sentiment-classifier},
}
```

**APA:**

Poudel, A. (2024). Sentiment Classifier for Depression. Retrieved from https://huggingface.co./poudel/sentiment-classifier


## Model Card Authors

Ashish Poudel

## Model Card Contact

[email protected]