---
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SPICED
size_categories:
- 1K<n<10K
source_datasets:
- extended|s2orc
tags: []
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---

# Dataset Card for SPICED

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- **Repository:** https://github.com/copenlu/scientific-information-change
- **Paper:**

### Dataset Summary

The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings drawn from scientific papers, news media, and Twitter. Pairs are of two types: <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity between the _findings_ described by the two sentences, on a scale from 1 to 5; this score is called the _Information Matching Score (IMS)_. The data was curated from S2ORC, with matching news articles and tweets identified via Altmetric. Instances were annotated by experts recruited through the Prolific platform using the Potato annotation tool. Please use the following citation when using this dataset:

```
@inproceedings{modeling-information-change,
  title = {{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
  author = {Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
  booktitle = {Proceedings of EMNLP},
  publisher = {Association for Computational Linguistics},
  year = {2022}
}
```

### Supported Tasks and Leaderboards

The task is to predict the IMS between two scientific sentences, a scalar between 1 and 5. The preferred metrics are mean-squared error and Pearson correlation.
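As a sketch, both metrics can be computed without any dependencies beyond the standard library; the predicted and gold IMS values below are toy numbers for illustration, not taken from the dataset:

```python
import math

def evaluate_ims(predicted, gold):
    """Mean-squared error and Pearson correlation between predicted
    and gold Information Matching Scores (both on the 1-5 scale)."""
    n = len(predicted)
    mse = sum((p - g) ** 2 for p, g in zip(predicted, gold)) / n
    mean_p = sum(predicted) / n
    mean_g = sum(gold) / n
    cov = sum((p - mean_p) * (g - mean_g) for p, g in zip(predicted, gold))
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    var_g = sum((g - mean_g) ** 2 for g in gold)
    pearson = cov / math.sqrt(var_p * var_g)
    return mse, pearson

# Toy example: four predicted IMS values against four gold labels.
mse, r = evaluate_ims([1.5, 3.0, 4.5, 2.0], [1.0, 3.5, 5.0, 2.0])
```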

### Languages

English

## Dataset Structure

### Data Fields

- DOI: The DOI of the original scientific article
- instance_id: Unique instance ID for the sample. The ID encodes the field, whether or not the pair involves a tweet, and whether the sample was labeled manually or automatically using SBERT (marked as "easy")
- News Finding: Text of the news or tweet finding
- Paper Finding: Text of the paper finding
- News Context: For news instances, the two sentences surrounding the news finding; for tweets, a copy of the tweet
- Paper Context: The two sentences surrounding the paper finding
- scores: Annotator scores after removing low-competence annotators
- field: The academic field of the paper ('Computer_Science', 'Medicine', 'Biology', or 'Psychology')
- split: The dataset split ('train', 'val', or 'test')
- final_score: The IMS of the instance
- source: Either "news" or "tweet"
- News Url: The URL of the source article for a news instance, or the tweet ID for a tweet
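To make the fields concrete, here is a minimal sketch of one instance as a Python dict. All values are hypothetical, and aggregating `scores` by mean into `final_score` is an assumption for illustration, not the dataset's documented derivation:

```python
# Sketch of a single SPICED instance; every value here is hypothetical.
instance = {
    "DOI": "10.1000/example",          # hypothetical DOI
    "instance_id": "Medicine_news_0",  # hypothetical ID format
    "News Finding": "Coffee may lower heart disease risk.",
    "Paper Finding": "Moderate coffee intake was associated with "
                     "reduced cardiovascular risk.",
    "scores": [4, 5, 4],               # per-annotator IMS ratings (1-5)
    "field": "Medicine",
    "split": "train",
    "source": "news",
}

# Mean aggregation is an assumption, used only to show the fields' shapes.
instance["final_score"] = sum(instance["scores"]) / len(instance["scores"])
```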

### Data Splits

- train: 4721 instances
- validation: 664 instances
- test: 640 instances

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset.