---
annotations_creators:
  - machine-generated
language_creators:
  - machine-generated
language:
  - en
license:
  - other
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-classification
task_ids:
  - multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
  - emotion-classification
dataset_info:
  - config_name: split
    features:
      - name: text
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': sadness
              '1': joy
              '2': love
              '3': anger
              '4': fear
              '5': surprise
    splits:
      - name: train
        num_bytes: 1741533
        num_examples: 16000
      - name: validation
        num_bytes: 214695
        num_examples: 2000
      - name: test
        num_bytes: 217173
        num_examples: 2000
    download_size: 1287193
    dataset_size: 2173401
  - config_name: unsplit
    features:
      - name: text
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': sadness
              '1': joy
              '2': love
              '3': anger
              '4': fear
              '5': surprise
    splits:
      - name: train
        num_bytes: 45444017
        num_examples: 416809
    download_size: 26888538
    dataset_size: 45444017
configs:
  - config_name: split
    data_files:
      - split: train
        path: split/train-*
      - split: validation
        path: split/validation-*
      - split: test
        path: split/test-*
    default: true
  - config_name: unsplit
    data_files:
      - split: train
        path: unsplit/train-*
train-eval-index:
  - config: default
    task: text-classification
    task_id: multi_class_classification
    splits:
      train_split: train
      eval_split: test
    col_mapping:
      text: text
      label: target
    metrics:
      - type: accuracy
        name: Accuracy
      - type: f1
        name: F1 macro
        args:
          average: macro
      - type: f1
        name: F1 micro
        args:
          average: micro
      - type: f1
        name: F1 weighted
        args:
          average: weighted
      - type: precision
        name: Precision macro
        args:
          average: macro
      - type: precision
        name: Precision micro
        args:
          average: micro
      - type: precision
        name: Precision weighted
        args:
          average: weighted
      - type: recall
        name: Recall macro
        args:
          average: macro
      - type: recall
        name: Recall micro
        args:
          average: micro
      - type: recall
        name: Recall weighted
        args:
          average: weighted
---

# Dataset Card for "emotion"

## Dataset Description

### Dataset Summary

Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.

### Supported Tasks and Leaderboards

More Information Needed

### Languages

More Information Needed

## Dataset Structure

### Data Instances

An example looks as follows.

```json
{
  "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
  "label": 0
}
```
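
A minimal loading sketch with the 🤗 `datasets` library, assuming the dataset is available under the Hub id `dair-ai/emotion` (substitute this repository's id if it differs):

```python
from datasets import load_dataset

# Load the default "split" configuration; "dair-ai/emotion" is an assumed Hub id.
ds = load_dataset("dair-ai/emotion", "split")

# Print the first training example, which has the shape shown above.
print(ds["train"][0])
```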

### Data Fields

The data fields are:

- `text`: a string feature.
- `label`: a classification label, with possible values including sadness (0), joy (1), love (2), anger (3), fear (4), surprise (5); see the decoding sketch below.
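
The integer ids above are stored as a `ClassLabel` feature, so they can be decoded back to their string names. A small sketch, assuming the dataset has been loaded as in the previous snippet:

```python
from datasets import load_dataset

ds = load_dataset("dair-ai/emotion", "split")  # assumed Hub id, as above

# The "label" column is a ClassLabel; .names lists the classes and int2str decodes an id.
label_feature = ds["train"].features["label"]
print(label_feature.names)       # ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
print(label_feature.int2str(0))  # 'sadness'
```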

### Data Splits

The dataset has 2 configurations:

- `split`: with a total of 20,000 examples split into train, validation and test
- `unsplit`: with a total of 416,809 examples in a single train split (see the loading sketch after the table)

| name    | train  | validation | test |
|---------|-------:|-----------:|-----:|
| split   | 16000  | 2000       | 2000 |
| unsplit | 416809 | n/a        | n/a  |
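
A hedged sketch of selecting either configuration (again assuming the `dair-ai/emotion` Hub id):

```python
from datasets import load_dataset

# Default configuration with train/validation/test splits.
emotion_split = load_dataset("dair-ai/emotion", "split")
print({name: split.num_rows for name, split in emotion_split.items()})
# expected: {'train': 16000, 'validation': 2000, 'test': 2000}

# Single-split configuration containing all 416,809 examples.
emotion_unsplit = load_dataset("dair-ai/emotion", "unsplit")
print(emotion_unsplit["train"].num_rows)  # expected: 416809
```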

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

The dataset should be used for educational and research purposes only.

### Citation Information

If you use this dataset, please cite:

```bibtex
@inproceedings{saravia-etal-2018-carer,
    title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
    author = "Saravia, Elvis  and
      Liu, Hsien-Chi Toby  and
      Huang, Yen-Hao  and
      Wu, Junlin  and
      Chen, Yi-Shin",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D18-1404",
    doi = "10.18653/v1/D18-1404",
    pages = "3687--3697",
    abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```

### Contributions

Thanks to @lhoestq, @thomwolf, @lewtun for adding this dataset.