---
task_categories:
- text-classification
language:
- en
---
# Movie Review Data

* Original source: sentence polarity dataset v1.0 http://www.cs.cornell.edu/people/pabo/movie-review-data/
* Appears to be the same data as https://huggingface.co./datasets/rotten_tomatoes, but with a different split (a rough way to check this is sketched below).
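
A minimal sketch of such an overlap check, assuming the raw `rt-polarity.pos` file is extracted locally and the `datasets` library is installed (encoding differences may prevent an exact match):

```python3
from datasets import concatenate_datasets, load_dataset

# Collect every snippet in the Hugging Face rotten_tomatoes dataset across its splits.
rt = load_dataset("rotten_tomatoes")
rt_texts = set(concatenate_datasets([rt["train"], rt["validation"], rt["test"]])["text"])

# Compare against the positive snippets of the original release (latin_1 encoded).
with open("rt-polarity.pos", encoding="latin_1") as f:
    pos_texts = {line.strip() for line in f}

print(f"{len(pos_texts & rt_texts)} of {len(pos_texts)} positive snippets also appear in rotten_tomatoes")
```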

## Original README

=======

Introduction

This README v1.0 (June, 2005) for the v1.0 sentence polarity dataset comes
from the URL
http://www.cs.cornell.edu/people/pabo/movie-review-data .

=======

Citation Info 

This data was first used in Bo Pang and Lillian Lee,
``Seeing stars: Exploiting class relationships for sentiment categorization
with respect to rating scales.'', Proceedings of the ACL, 2005.
  
@InProceedings{Pang+Lee:05a,
  author =       {Bo Pang and Lillian Lee},
  title =        {Seeing stars: Exploiting class relationships for sentiment
                  categorization with respect to rating scales},
  booktitle =    {Proceedings of the ACL},
  year =         2005
}

=======

Data Format Summary 

- rt-polaritydata.tar.gz: contains this readme and two data files that
  were used in the experiments described in Pang/Lee ACL 2005.

  Specifically: 
  * rt-polarity.pos contains 5331 positive snippets
  * rt-polarity.neg contains 5331 negative snippets

  Each line in these two files corresponds to a single snippet (usually
  containing roughly one single sentence); all snippets are down-cased.  
  The snippets were labeled automatically, as described below (see 
  section "Label Decision").

  Note: The original source files from which the data in
  rt-polaritydata.tar.gz was derived can be found in the subjective
  part (Rotten Tomatoes pages) of subjectivity_html.tar.gz (released 
  with subjectivity dataset v1.0).

   
=======

Label Decision 

We assumed snippets (from Rotten Tomatoes webpages) for reviews marked with 
``fresh'' are positive, and those for reviews marked with ``rotten'' are
negative.
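
The counts and casing described in the original README above can be checked quickly against the raw files. This is a minimal sketch, not part of the original release, and assumes `rt-polaritydata.tar.gz` has been extracted into the working directory:

```python3
# Sanity check of the raw files: 5331 snippets per file, all down-cased.
for path, expected in [("rt-polarity.pos", 5331), ("rt-polarity.neg", 5331)]:
    with open(path, encoding="latin_1") as f:
        lines = [line.strip() for line in f]
    assert len(lines) == expected, (path, len(lines))
    assert all(line == line.lower() for line in lines)  # snippets are down-cased
```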

## Preprocessing

To produce CSV files with `text` and `label` fields, we use the following script.

```python3
import csv
import random

# NOTE: The original files are encoded in latin_1; the output CSVs are written as utf8.
with open("rt-polarity.pos", encoding="latin_1") as f:
    texts_pos = [line.strip() for line in f]
with open("rt-polarity.neg", encoding="latin_1") as f:
    texts_neg = [line.strip() for line in f]

rows_pos = [{"text": text, "label": 1} for text in texts_pos]
rows_neg = [{"text": text, "label": 0} for text in texts_neg]

# NOTE: For fair evaluation, we split the data into train and test sets. We also provide
# the whole (unsplit) data for researchers who want to use a different setting.
# NOTE: We follow the split setting of the LM-BFF paper.
rows_whole = rows_pos + rows_neg
random.Random(42).shuffle(rows_whole)
rows_test, rows_train = rows_whole[:2000], rows_whole[2000:]

# NOTE: No header row is written; downstream loaders must supply the column names.
with open("whole.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_whole)
with open("train.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_train)
with open("test.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_test)
```
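
For reference, the generated CSVs can be read back with the standard library. Since no header row is written, the field names must be supplied explicitly (a small sketch, not part of the preprocessing itself):

```python3
import csv

# Reload the generated train split and check its size and label balance.
with open("train.csv", encoding="utf8", newline="") as f:
    reader = csv.DictReader(f, fieldnames=["text", "label"])  # files have no header row
    rows_train = list(reader)

print(len(rows_train))                                 # 10662 - 2000 = 8662 rows
print(sum(row["label"] == "1" for row in rows_train))  # number of positive snippets
```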