Unexpected predictions in certain scenarios (e.g., "I feel lonely and hopeless. Nothing seems to bring me joy.")
#3011 · opened by richardmtp
Example Case (Incorrect Prediction):
I tested the model with the following input: "I feel lonely and hopeless. Nothing seems to bring me joy."
Expected Emotion: Sadness
Model Output:
```python
{
    'Anger':    0.00085,
    'Disgust':  0.00256,
    'Fear':     0.00203,
    'Joy':      0.00098,
    'Sadness':  0.00518,
    'Surprise': 0.98749,  # unexpectedly high!
}
```
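For reference, here is roughly how I'm querying the model. This is a minimal sketch assuming a standard Hugging Face `text-classification` pipeline; the checkpoint ID is a placeholder for the model under discussion, not its real name.

```python
# Minimal reproduction sketch -- the model ID below is a placeholder,
# not the actual checkpoint being discussed.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/emotion-model",  # placeholder: substitute the real checkpoint
    top_k=None,  # return scores for all labels, not just the top one
)

text = "I feel lonely and hopeless. Nothing seems to bring me joy."
scores = classifier(text)[0]  # list of {'label': ..., 'score': ...} dicts
for item in sorted(scores, key=lambda s: s["score"], reverse=True):
    print(f"{item['label']}: {item['score']:.5f}")
```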
Questions for the Community:
1. Why is "Surprise" predicted instead of "Sadness"?
2. Could this be due to the dataset the model was fine-tuned on?
3. Has anyone faced similar misclassifications, and how did you address them?
4. Would fine-tuning the model on a mental-health-specific dataset improve accuracy? If so, which datasets would you recommend? (A rough sketch of the fine-tuning I have in mind follows this list.)
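To make question 4 concrete, here is a rough sketch of the fine-tuning I have in mind, using the standard `Trainer` API. The dataset (`dair-ai/emotion`) is only an example of a labeled emotion corpus, not a specific recommendation, and the checkpoint ID is again a placeholder.

```python
# Fine-tuning sketch -- dataset and model IDs are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "your-org/emotion-model"  # placeholder checkpoint

# Example 6-class emotion dataset with "text"/"label" columns.
dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID,
    num_labels=6,
    ignore_mismatched_sizes=True,  # in case the checkpoint's label head differs
)

args = TrainingArguments(
    output_dir="emotion-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```

Whether something like this would actually fix the "Surprise" bias presumably depends on how well the new dataset covers depressive language, which is part of what I'm asking.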