Update README.md
This model is a fine-tuned version of `cardiffnlp/twitter-roberta-base-sentiment`.
The text classification task in this model is based on 3 sentiment labels.

## Full classification example:
```python
from transformers import pipeline

# Load the fine-tuned sentiment model from the Hub.
pipe = pipeline(model="delarosajav95/tw-roberta-base-sentiment-FT")

inputs = [
    "The flat is very nice but it's too expensive and the location is very bad.",
    "I loved the music, but the crowd was too rowdy to enjoy it properly.",
    "They believe that I'm stupid and I like waiting for hours in line to buy a simple coffee.",
]

# return_all_scores=True yields the score of every label for each input
# (newer transformers versions express this as top_k=None).
result = pipe(inputs, return_all_scores=True)

# Map the raw LABEL_* ids to human-readable names.
label_mapping = {"LABEL_0": "Negative", "LABEL_1": "Neutral", "LABEL_2": "Positive"}
for i, predictions in enumerate(result):
    print("==================================")
    print(f"Text {i + 1}: {inputs[i]}")
    for pred in predictions:
        label = label_mapping.get(pred["label"], pred["label"])
        score = pred["score"]
        print(f"{label}: {score:.2%}")
```

Output:

```
==================================
Text 1: The flat is very nice but it's too expensive and the location is very bad.
Negative: 0.09%
Neutral: 99.88%
Positive: 0.03%
==================================
Text 2: I loved the music, but the crowd was too rowdy to enjoy it properly.
Negative: 0.04%
Neutral: 99.92%
Positive: 0.04%
==================================
Text 3: They believe that I'm stupid and I like waiting for hours in line to buy a simple coffee.
Negative: 69.79%
Neutral: 30.12%
Positive: 0.09%
```
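If you only need the single most likely label per input, the nested score lists above can be reduced with `max`. A minimal sketch of that post-processing step, using a hardcoded `predictions` list that mirrors the per-input structure returned with `return_all_scores=True` (the scores below are illustrative placeholders, not real model outputs):

```python
# Reduce one input's full score list to its top label.
# `predictions` mirrors the structure the pipeline returns per input;
# the scores here are illustrative placeholders, not real model outputs.
label_mapping = {"LABEL_0": "Negative", "LABEL_1": "Neutral", "LABEL_2": "Positive"}

predictions = [
    {"label": "LABEL_0", "score": 0.6979},
    {"label": "LABEL_1", "score": 0.3012},
    {"label": "LABEL_2", "score": 0.0009},
]

# Pick the entry with the highest score and map its raw id to a readable name.
top = max(predictions, key=lambda p: p["score"])
print(f"{label_mapping.get(top['label'], top['label'])}: {top['score']:.2%}")  # Negative: 69.79%
```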
## Metrics and results:

It achieves the following results on the evaluation set:
## CITATION:

```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
    title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
    author = "Barbieri, Francesco and
      Camacho-Collados, Jose and
      Espinosa Anke, Luis and
      Neves, Leonardo",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    doi = "10.18653/v1/2020.findings-emnlp.148",
    pages = "1644--1650"
}
```
## More Information