jrfish committed
Commit b731595
1 Parent(s): 6d79e66

Update README.md

Files changed (1)
  1. README.md +4 -22
README.md CHANGED
@@ -1,7 +1,3 @@
-Citation:
-
-
----
 annotations_creators:
 - found
 language:
@@ -30,38 +26,24 @@ task_categories:
 - other
 task_ids:
 - multi-class-classification
-- multi-label-classification---
+- multi-label-classification
 
-# Dataset Card for [Dataset Name]
+# Dataset Card for [AuthorMix]
 
 ## Table of Contents
 - [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
-
 - [Dataset Summary](#dataset-summary)
-
 - [Languages](#languages)
-English
 - [Dataset Structure](#dataset-structure)
-
 - [Data Instances](#data-instances)
-- [Data Fields](#data-fields)
 - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
-- [Annotations](#annotations)
-- [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-- [Social Impact of Dataset](#social-impact-of-dataset)
-- [Discussion of Biases](#discussion-of-biases)
-- [Other Known Limitations](#other-known-limitations)
 - [Additional Information](#additional-information)
-- [Dataset Curators](#dataset-curators)
-- [Licensing Information](#licensing-information)
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
-
 ## Dataset Description
 AUTHORMIX was originally created for the authorship obfuscation task and contains data from four distinct domains: presidential speeches, early-1900s fiction novels, scholarly articles, and diary-style blogs. Altogether, AUTHORMIX contains over 30k high-quality paragraphs from 14 authors.
 
@@ -94,12 +76,12 @@ No preset data splits available.
 ## Dataset Creation
 For the presidential domain, we curate and clean a novel collection of high-quality presidential speeches from George W. Bush (n=38), Barack Obama (n=29), and Donald Trump (n=26), transcribed by the Miller Center (https://data.millercenter.org) at the University of Virginia. We broke the speeches into their natural paragraphs and then selected all paragraphs of 2-5 sentences, for a total of n = 13K paragraphs.
 
-Similarly, we also decided to develop a new collection of early 1900s fiction writers from the with strong writing styles, therefore we choose text from books by Ernest Hemingway, F. Scott Fitzgerald, and Virginia Woolf which were collected from Project Gutenbert (https://www.gutenberg.org). We selected the top 4 most popular books on Project Gutenberg for each author and then again, used the natural paragraphs from each author. We selected all paragraphs between 2-5 sentences. This resulted in a total of n= 9K paragraphs.
+Similarly, we developed a new collection of early-1900s fiction writers with strong writing styles, choosing text from books by Ernest Hemingway, F. Scott Fitzgerald, and Virginia Woolf collected from Project Gutenberg (https://www.gutenberg.org). We selected the top 4 most popular books on Project Gutenberg for each author, again used each author's natural paragraphs, and kept all paragraphs of 2-5 sentences, for a total of n = 9K paragraphs.
 
 Lastly, we adapted data from two existing datasets: the Extended-Brennan-Greenstadt corpus, a collection of "scholarly" short (500-word) passages gathered from Amazon Mechanical Turk (AMT), and the Blog Authorship Corpus, a collection of diary-style blog entries posted to blog.com. For the AMT dataset, we used authors "h", "pp", and "qq" and artificially created paragraphs by chunking the text into random groups of 2-5 sentences (as the text is not naturally broken into paragraphs). For the Blog dataset, we used authors "5546", "11518", "25872", "30102", and "30407" and kept the natural paragraphs. Then, to match the speech and novel domains, we filtered both to paragraphs of 2-5 sentences with at least 3 words, resulting in n = 500 and n = 8K paragraphs for AMT and Blog, respectively.
 ### Curation Rationale
 
-This dataset was creatd specifically for authorship obfuscation. We wanted to create a diverse high-quality dataset, which had domains of writings with similar topics but strong stylstic writing.
+This dataset was created specifically for authorship obfuscation. We wanted a diverse, high-quality dataset whose domains share similar topics but exhibit strong stylistic differences.
 
 ### Source Data
 Miller Center (https://data.millercenter.org)
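The paragraph-selection procedure the card describes (keep natural paragraphs of 2-5 sentences with at least 3 words; for the AMT text, which has no natural paragraphs, chunk into random groups of 2-5 sentences) can be sketched as below. This is an illustrative sketch only, not the authors' code: the naive regex sentence splitter, function names, and the fixed random seed are all assumptions.

```python
import random
import re


def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter on terminal punctuation; the card does not
    # specify the actual sentence-segmentation tooling used.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]


def keep_paragraph(paragraph: str, min_sents: int = 2, max_sents: int = 5,
                   min_words: int = 3) -> bool:
    """Apply the card's stated filter: 2-5 sentences and at least 3 words."""
    n_sents = len(split_sentences(paragraph))
    n_words = len(paragraph.split())
    return min_sents <= n_sents <= max_sents and n_words >= min_words


def chunk_into_paragraphs(text: str, min_sents: int = 2, max_sents: int = 5,
                          seed: int = 0) -> list[str]:
    """Chunk unbroken text into pseudo-paragraphs of 2-5 random sentences,
    as described for the AMT data (seed is an assumption for repeatability)."""
    rng = random.Random(seed)
    sents = split_sentences(text)
    chunks, i = [], 0
    while i < len(sents):
        n = rng.randint(min_sents, max_sents)
        chunks.append(" ".join(sents[i:i + n]))
        i += n
    return chunks
```

In practice one would chunk first (for AMT) or take natural paragraphs (for the other domains), then apply `keep_paragraph` to every candidate, which is why the final AMT count is small (n = 500).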