parquet-converter committed
Commit: a3a8182
Parent(s): aa1f981

Update parquet files
.gitattributes DELETED
@@ -1,53 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
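
These deleted rules routed matching file types through Git LFS (the repository stores a small pointer while the payload lives in LFS storage), which is why the parquet files added below appear as three-line pointer files. As a rough, illustrative Python sketch of checking a path against rules in this format — the helper name is mine, and fnmatch only approximates gitattributes pattern semantics (e.g. for saved_model/**/*):

```python
# Illustrative sketch, not part of this commit: report which .gitattributes
# pattern, if any, routes a given path through Git LFS.
from fnmatch import fnmatch
from pathlib import Path


def lfs_pattern_for(path: str, gitattributes: str = ".gitattributes"):
    """Return the first LFS-tracked pattern matching `path`, or None."""
    for line in Path(gitattributes).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        pattern, *attrs = line.split()
        if "filter=lfs" in attrs and (
            fnmatch(path, pattern) or fnmatch(Path(path).name, pattern)
        ):
            return pattern
    return None


# e.g. lfs_pattern_for("copenlu--spiced/csv-train.parquet") -> "*.parquet"
```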
README.md DELETED
@@ -1,170 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- - machine-generated
- language:
- - en
- language_creators:
- - found
- license:
- - mit
- multilinguality:
- - monolingual
- paperswithcode_id: null
- pretty_name: SPICED
- size_categories:
- - 1K<n<10K
- source_datasets:
- - extended|s2orc
- tags:
- - scientific text
- - scholarly text
- - semantic text similarity
- - fact checking
- - misinformation
- task_categories:
- - text-classification
- task_ids:
- - text-scoring
- - semantic-similarity-scoring
- ---
-
- # Dataset Card for SPICED
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- - **Repository:** https://github.com/copenlu/scientific-information-change
- - **Paper:**
-
- ### Dataset Summary
-
- The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings drawn from scientific papers, news media, and Twitter. Pairs are of two types: <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity between the _findings_ described by the two sentences, on a scale from 1 to 5; this label is called the _Information Matching Score (IMS)_. The data was curated from S2ORC, with matching news articles and tweets found via Altmetric. Instances are annotated by experts using the Prolific platform and the Potato annotation tool. Please use the following citation when using this dataset:
-
- ```
- @inproceedings{modeling-information-change,
-     title={{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
-     author={Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
-     booktitle = {Proceedings of EMNLP},
-     publisher = {Association for Computational Linguistics},
-     year = {2022}
- }
- ```
-
- ### Supported Tasks and Leaderboards
-
- The task is to predict the IMS for a pair of scientific sentences, a scalar between 1 and 5. The preferred metrics are mean squared error and Pearson correlation.
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Fields
-
- - DOI: The DOI of the original scientific article
- - instance\_id: Unique instance ID for the sample. The ID encodes the field, whether the instance is a tweet, and whether it was labeled manually or automatically using SBERT (automatic instances are marked "easy")
- - News Finding: Text of the news or tweet finding
- - Paper Finding: Text of the paper finding
- - News Context: For news instances, the two sentences surrounding the news finding; for tweets, a copy of the tweet
- - Paper Context: The two sentences surrounding the paper finding
- - scores: Annotator scores after removing low-competence annotators
- - field: The academic field of the paper ('Computer\_Science', 'Medicine', 'Biology', or 'Psychology')
- - split: The dataset split ('train', 'val', or 'test')
- - final\_score: The IMS of the instance
- - source: Either "news" or "tweet"
- - News Url: The URL of the source article for news instances, or the tweet ID for tweets
-
- ### Data Splits
-
- - train: 4721 instances
- - validation: 664 instances
- - test: 640 instances
-
- ## Dataset Creation
-
- For the full details of how the dataset was created, please refer to our [EMNLP 2022 paper]().
-
- ### Curation Rationale
-
- Science communication is a complex process of translating highly technical scientific language into common language that lay people can understand. At the same time, the general public relies on good science communication to inform critical decisions about their health and behavior. SPICED was curated to provide a training dataset and benchmark for machine learning models that measure changes in scientific information at different stages of the science communication pipeline.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Scientific text: S2ORC
-
- News articles and tweets were collected through Altmetric.
-
- #### Who are the source language producers?
-
- Scientists, journalists, and Twitter users.
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- Models trained on SPICED can be used for large-scale analyses of science communication: they can match the same finding discussed in different media and reveal how reporting differs across stages of the science communication pipeline. We hope this helps build tools that improve science communication.
-
- ### Discussion of Biases
-
- The dataset is restricted to computer science, medicine, biology, and psychology, which may bias the topics on which trained models perform well.
-
- ### Other Known Limitations
-
- While some context is available, we do not release the full text of the news articles and scientific papers, which may contain further context helpful for learning the task. We do, however, provide the paper DOIs and links to the original news articles in case the full text is desired.
-
- ## Additional Information
-
- ### Dataset Curators
-
- Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein
-
- ### Licensing Information
-
- MIT
-
- ### Contributions
-
- Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset.
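
The deleted card above names the task (predict the 1-to-5 IMS) and its preferred metrics. As a hedged, self-contained sketch of how the converted dataset might be loaded and evaluated, assuming the Hub id `copenlu/spiced` and the column names listed under Data Fields ("News Finding", "Paper Finding", "final_score"); a crude word-overlap baseline stands in for a real model:

```python
# Hedged sketch: load SPICED from the Hugging Face Hub and score predictions
# with the metrics named in the card (mean squared error and Pearson r).
# The dataset id and column names are assumptions taken from this repository.
import numpy as np
from datasets import load_dataset
from scipy.stats import pearsonr

spiced = load_dataset("copenlu/spiced")   # splits: train / validation / test
test = spiced["test"]


def overlap_score(a: str, b: str) -> float:
    """Crude word-overlap baseline mapped onto the 1-5 IMS scale."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 + 4.0 * len(ta & tb) / max(len(ta | tb), 1)


preds = np.array(
    [overlap_score(n, p) for n, p in zip(test["News Finding"], test["Paper Finding"])]
)
gold = np.array(test["final_score"], dtype=float)

mse = float(np.mean((preds - gold) ** 2))
r, _ = pearsonr(preds, gold)
print(f"MSE={mse:.3f}  Pearson r={r:.3f}")
```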
copenlu--spiced/csv-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4347c7462d3426585548aae0f5a4aeec58c6f1b4555b53596f32b8ed17b2fd2e
+ size 368358
copenlu--spiced/csv-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3e2f4b24995d213b03e832d9fa54962d9290a99680935da676c9ffc9c05fd52
+ size 3108838
copenlu--spiced/csv-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7699fbd0c12bfc0d955571d70667289dbe110acee809da4dca39808da88392ba
+ size 374376
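
Each parquet file added above is committed as a Git LFS pointer with the three key-value lines shown (version, oid, size); the actual bytes live in LFS storage. A minimal sketch of reading that pointer format (the helper name is illustrative, not part of any library):

```python
# Minimal sketch: parse a Git LFS pointer of the form shown above
# ("version ...", "oid sha256:<hex>", "size <bytes>").
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algorithm, _, digest = fields["oid"].partition(":")
    return {
        "spec": fields["version"],
        "algorithm": algorithm,   # e.g. "sha256"
        "digest": digest,
        "size": int(fields["size"]),
    }


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7699fbd0c12bfc0d955571d70667289dbe110acee809da4dca39808da88392ba
size 374376"""
print(parse_lfs_pointer(pointer))
```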
dev.csv DELETED
The diff for this file is too large to render. See raw diff
 
test.csv DELETED
The diff for this file is too large to render. See raw diff
 
train.csv DELETED
The diff for this file is too large to render. See raw diff
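
The deleted dev/test/train CSVs are superseded by the parquet files added above. A hedged sketch of fetching and reading one split directly, assuming the dataset repo id `copenlu/spiced` and the file path shown in this commit:

```python
# Sketch only: download one converted parquet split from the Hub and read it
# with pandas (requires pyarrow). The repo id and filename are taken from this
# commit and may change if the repository layout is reorganized.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="copenlu/spiced",
    repo_type="dataset",
    filename="copenlu--spiced/csv-validation.parquet",
)
df = pd.read_parquet(path)
print(df.shape)               # the card lists 664 validation instances
print(df.columns.tolist())
```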