Datasets:
Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: multi-class-classification
Languages: English
Size: 100K - 1M
Tags: emotion-classification
License:
Update files from the datasets library (from 1.9.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.9.0
- README.md +23 -5
- dataset_infos.json +1 -1
- emotion.py +2 -0
README.md
CHANGED
@@ -1,6 +1,24 @@
 ---
+pretty_name: Emotion
+annotations_creators:
+- machine-generated
+language_creators:
+- machine-generated
 languages:
 - en
+licenses:
+- unknown
+multilinguality:
+- monolingual
+size_categories:
+- 10K<n<100K
+source_datasets:
+- original
+task_categories:
+- text-classification
+task_ids:
+- multi-class-classification
+- text-classification-other-emotion-classification
 paperswithcode_id: emotion
 ---
 
@@ -97,10 +115,10 @@ The data fields are the same among all splits.
 
 ### Data Splits
 
-| name
-
-|default|16000|
-|emotion|16000|
+| name | train | validation | test |
+| ------- | ----: | ---------: | ---: |
+| default | 16000 | 2000 | 2000 |
+| emotion | 16000 | 2000 | 2000 |
 
 ## Dataset Creation
 
@@ -182,4 +200,4 @@ The data fields are the same among all splits.
 
 ### Contributions
 
-Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
+Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
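The updated split table can be checked against the library itself. A minimal sketch (assuming the `datasets` package is installed and the `emotion` dataset can be downloaded) that loads the dataset and prints the per-split row counts:

```python
# Sanity-check the split sizes documented in the README table above.
# Minimal sketch: assumes the `datasets` library is installed and the
# "emotion" dataset is reachable.
from datasets import load_dataset

emotion = load_dataset("emotion")  # DatasetDict with train / validation / test

for split_name, split in emotion.items():
    print(split_name, split.num_rows)
# Expected per the table: train 16000, validation 2000, test 2000
```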
dataset_infos.json
CHANGED
@@ -1 +1 @@
-{"
+{"default": {"description": "Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.\n", "citation": "@inproceedings{saravia-etal-2018-carer,\n title = \"{CARER}: Contextualized Affect Representations for Emotion Recognition\",\n author = \"Saravia, Elvis and\n Liu, Hsien-Chi Toby and\n Huang, Yen-Hao and\n Wu, Junlin and\n Chen, Yi-Shin\",\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing\",\n month = oct # \"-\" # nov,\n year = \"2018\",\n address = \"Brussels, Belgium\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/D18-1404\",\n doi = \"10.18653/v1/D18-1404\",\n pages = \"3687--3697\",\n abstract = \"Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.\",\n}\n", "homepage": "https://github.com/dair-ai/emotion_dataset", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 6, "names": ["sadness", "joy", "love", "anger", "fear", "surprise"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "label"}, "task_templates": [{"task": "text-classification", "text_column": "text", "label_column": "label", "labels": ["anger", "fear", "joy", "love", "sadness", "surprise"]}], "builder_name": "emotion", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1741541, "num_examples": 16000, "dataset_name": "emotion"}, "validation": {"name": "validation", "num_bytes": 214699, "num_examples": 2000, "dataset_name": "emotion"}, "test": {"name": "test", "num_bytes": 217177, "num_examples": 2000, "dataset_name": "emotion"}}, "download_checksums": {"https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1": {"num_bytes": 1658616, "checksum": "3ab03d945a6cb783d818ccd06dafd52d2ed8b4f62f0f85a09d7d11870865b190"}, "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1": {"num_bytes": 204240, "checksum": "34faaa31962fe63cdf5dbf6c132ef8ab166c640254ab991af78f3aea375e79ef"}, "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1": {"num_bytes": 206760, "checksum": "60f531690d20127339e7f054edc299a82c627b5ec0dd5d552d53d544e0cfcc17"}}, "download_size": 2069616, "post_processing_size": null, "dataset_size": 2173417, "size_in_bytes": 4243033}}
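The metadata recorded in `dataset_infos.json` (features, class names, split sizes) is what the library exposes after loading. A minimal sketch, assuming `datasets` is installed, that reads the label schema back from the loaded split:

```python
# Inspect the label schema described in dataset_infos.json.
# Minimal sketch: assumes the `datasets` library is installed.
from datasets import load_dataset

train = load_dataset("emotion", split="train")

label_feature = train.features["label"]  # ClassLabel with 6 classes
print(label_feature.names)               # ["sadness", "joy", "love", "anger", "fear", "surprise"]
print(label_feature.int2str(train[0]["label"]))  # first example's label id mapped back to its name
```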
emotion.py
CHANGED
@@ -1,6 +1,7 @@
 import csv
 
 import datasets
+from datasets.tasks import TextClassification
 
 
 _CITATION = """\
@@ -44,6 +45,7 @@ class Emotion(datasets.GeneratorBasedBuilder):
             supervised_keys=("text", "label"),
             homepage=_URL,
             citation=_CITATION,
+            task_templates=[TextClassification(text_column="text", label_column="label")],
         )
 
     def _split_generators(self, dl_manager):
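Declaring `task_templates` lets downstream code request a standardized text-classification view of the dataset. A minimal sketch, assuming the `prepare_for_task` helper that accompanies task templates in this generation of `datasets`:

```python
# Use the TextClassification task template declared in emotion.py.
# Minimal sketch: assumes `Dataset.prepare_for_task` is available in the
# installed version of `datasets` (it ships with the task-template API).
from datasets import load_dataset

train = load_dataset("emotion", split="train")

# Cast/rename columns according to the text-classification template.
prepared = train.prepare_for_task("text-classification")
print(prepared.column_names)  # standardized columns for the task
```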