parquet-converter committed on
Commit
ddb994f
1 parent: c9c0c72

Update parquet files

.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,189 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets: []
- task_categories:
- - text-classification
- task_ids:
- - fact-checking
- pretty_name: RumourEval 2019
- tags:
- - stance-detection
- ---
-
- # Dataset Card for "rumoureval_2019"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://competitions.codalab.org/competitions/19938](https://competitions.codalab.org/competitions/19938)
- - **Repository:** [https://figshare.com/articles/dataset/RumourEval_2019_data/8845580](https://figshare.com/articles/dataset/RumourEval_2019_data/8845580)
- - **Paper:** [https://aclanthology.org/S19-2147/](https://aclanthology.org/S19-2147/), [https://arxiv.org/abs/1809.06683](https://arxiv.org/abs/1809.06683)
- - **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- - **Size of downloaded dataset files:**
- - **Size of the generated dataset:**
- - **Total amount of disk used:**
-
- ### Dataset Summary
-
- Stance prediction task in English. The goal is to predict whether a given reply to a claim supports, denies, questions, or simply comments on the claim. It was run as a SemEval task in 2019.
-
- ### Supported Tasks and Leaderboards
-
- * SemEval-2019 Task 7
-
- ### Languages
-
- English of various origins; BCP-47: `en`
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### RumourEval2019
-
- An example of 'train' looks as follows.
-
- ```
- {
-   'id': '0',
-   'source_text': 'Appalled by the attack on Charlie Hebdo in Paris, 10 - probably journalists - now confirmed dead. An attack on free speech everywhere.',
-   'reply_text': '@m33ryg @tnewtondunn @mehdirhasan Of course it is free speech, that\'s the definition of "free speech" to openly make comments or draw a pic!',
-   'label': 3
- }
- ```
-
- ### Data Fields
-
- - `id`: a `string` feature.
- - `source_text`: a `string` expressing a claim/topic.
- - `reply_text`: a `string` to be classified for its stance towards the source.
- - `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
-
- ```
- 0: "support",
- 1: "deny",
- 2: "query",
- 3: "comment"
- ```
-
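As a sanity check on this tagset, the mapping can be rebuilt with `datasets.ClassLabel` — a minimal sketch, using the label order from the loader script further down this diff:

```python
from datasets import ClassLabel

# Stance tagset as declared in rumoureval_2019.py below.
stance = ClassLabel(names=["support", "deny", "query", "comment"])

print(stance.int2str(3))       # "comment"
print(stance.str2int("deny"))  # 1
```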
- ### Data Splits
-
- | name  | instances |
- |-------|----------:|
- | train |     7,005 |
- | dev   |     2,425 |
- | test  |     2,945 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- #### Who are the source language producers?
-
- Twitter users
-
- ### Annotations
-
- #### Annotation process
-
- Detailed in [Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0150989)
-
- #### Who are the annotators?
-
- ### Personal and Sensitive Information
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- ### Discussion of Biases
-
- ### Other Known Limitations
-
- ## Additional Information
-
- ### Dataset Curators
-
- The dataset is curated by the paper's authors.
-
- ### Licensing Information
-
- The authors distribute this data under the Creative Commons Attribution 4.0 (CC BY 4.0) license.
-
- ### Citation Information
-
- ```
- @inproceedings{gorrell-etal-2019-semeval,
-     title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours",
-     author = "Gorrell, Genevieve and
-       Kochkina, Elena and
-       Liakata, Maria and
-       Aker, Ahmet and
-       Zubiaga, Arkaitz and
-       Bontcheva, Kalina and
-       Derczynski, Leon",
-     booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
-     month = jun,
-     year = "2019",
-     address = "Minneapolis, Minnesota, USA",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/S19-2147",
-     doi = "10.18653/v1/S19-2147",
-     pages = "845--854",
- }
- ```
-
- ### Contributions
-
- Dataset added by its author, [@leondz](https://github.com/leondz).
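The parquet conversion below does not change how the card says the data is used; a minimal loading sketch, where `strombergnlp/rumoureval_2019` is an assumed Hub repo id (the actual path is not shown in this commit):

```python
from datasets import load_dataset

# "strombergnlp/rumoureval_2019" is a placeholder -- substitute the actual
# Hub path of the repository this commit belongs to.
ds = load_dataset("strombergnlp/rumoureval_2019", "RumourEval2019")

print(ds)              # train / validation / test splits
print(ds["train"][0])  # e.g. {'id': '0', 'source_text': ..., 'reply_text': ..., 'label': 3}
```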
RumourEval2019/rumoureval_2019-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9316e05e30f72be4f7de6d920fad08e5cb05e2c7fead0984288caa71e7abe6f
+ size 167579
RumourEval2019/rumoureval_2019-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d1c6e9a10619d1616f80ed81a59957f69c113dc90f249a56f518568cf364d3f
+ size 407661
RumourEval2019/rumoureval_2019-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6972cab631e5d15af05d5a754995df6d334c78b627f92e0d5ce78f7ef8a6c84e
+ size 160413
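These three additions are Git LFS pointer stubs; the parquet payloads themselves live in LFS storage. Once fetched (e.g. via `git lfs pull`), they can be read directly — a minimal sketch assuming `pandas` with a parquet engine installed:

```python
import pandas as pd

# Path as added in this commit; assumes `git lfs pull` has replaced the
# pointer stub with the real parquet payload.
train = pd.read_parquet("RumourEval2019/rumoureval_2019-train.parquet")

print(len(train))              # number of training instances
print(train.columns.tolist())  # expected: ['id', 'source_text', 'reply_text', 'label']
```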
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"RumourEval2019": {"description": "This new dataset is designed to solve this great NLP task and is crafted with a lot of care.\n", "citation": "@InProceedings{huggingface:dataset,\ntitle = {A great new dataset},\nauthor={huggingface, Inc.\n},\nyear={2020}\n}\n", "homepage": "", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "source_text": {"dtype": "string", "id": null, "_type": "Value"}, "reply_text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 4, "names": ["support", "query", "deny", "comment"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "rumour_eval2019", "config_name": "RumourEval2019", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1242200, "num_examples": 4879, "dataset_name": "rumour_eval2019"}, "validation": {"name": "validation", "num_bytes": 412707, "num_examples": 1440, "dataset_name": "rumour_eval2019"}, "test": {"name": "test", "num_bytes": 491431, "num_examples": 1675, "dataset_name": "rumour_eval2019"}}, "download_checksums": {"rumoureval2019_train.csv": {"num_bytes": 1203917, "checksum": "134c036e34da708f0edb22b3cc688054d6395d1669eef78e4afa0fd9a4ed4c43"}, "rumoureval2019_val.csv": {"num_bytes": 402303, "checksum": "6cc859c2eff320ba002866e0b78f7e956b78d58e9e3a7843798b2dd9c23de201"}, "rumoureval2019_test.csv": {"num_bytes": 479250, "checksum": "7d103bfb55cdef3b0d26c481ceb772159ae824aa15bf26e8b26dc87a58c55508"}}, "download_size": 2085470, "post_processing_size": null, "dataset_size": 2146338, "size_in_bytes": 4231808}}
rumoureval2019_test.csv DELETED
The diff for this file is too large to render. See raw diff
 
rumoureval2019_train.csv DELETED
The diff for this file is too large to render. See raw diff
 
rumoureval2019_val.csv DELETED
The diff for this file is too large to render. See raw diff
 
rumoureval_2019.py DELETED
@@ -1,122 +0,0 @@
- # Copyright 2022 Mads Kongsbak and Leon Derczynski
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """RumourEval 2019: Stance Prediction"""
-
-
- import csv
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{gorrell-etal-2019-semeval,
-     title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours",
-     author = "Gorrell, Genevieve and
-       Kochkina, Elena and
-       Liakata, Maria and
-       Aker, Ahmet and
-       Zubiaga, Arkaitz and
-       Bontcheva, Kalina and
-       Derczynski, Leon",
-     booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
-     month = jun,
-     year = "2019",
-     address = "Minneapolis, Minnesota, USA",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/S19-2147",
-     doi = "10.18653/v1/S19-2147",
-     pages = "845--854",
- }
- """
-
- _DESCRIPTION = """\
- Stance prediction task in English. The goal is to predict whether a given reply to a claim supports, denies, questions, or simply comments on the claim. It was run as a SemEval task in 2019.
- """
-
- # Homepage as given in the dataset card.
- _HOMEPAGE = "https://competitions.codalab.org/competitions/19938"
-
- _LICENSE = "cc-by-4.0"
-
-
- class RumourEval2019Config(datasets.BuilderConfig):
-
-     def __init__(self, **kwargs):
-         super().__init__(**kwargs)
-
-
- class RumourEval2019(datasets.GeneratorBasedBuilder):
-     """RumourEval2019 stance detection dataset, formatted as triples of (source_text, reply_text, label)."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         RumourEval2019Config(name="RumourEval2019", version=VERSION, description="Stance Detection Dataset"),
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "id": datasets.Value("string"),
-                 "source_text": datasets.Value("string"),
-                 "reply_text": datasets.Value("string"),
-                 "label": datasets.features.ClassLabel(
-                     names=[
-                         "support",
-                         "deny",
-                         "query",
-                         "comment",
-                     ]
-                 ),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         # The CSVs ship with the repository, so download_and_extract simply
-         # resolves them to local paths.
-         train_text = dl_manager.download_and_extract("rumoureval2019_train.csv")
-         validation_text = dl_manager.download_and_extract("rumoureval2019_val.csv")
-         test_text = dl_manager.download_and_extract("rumoureval2019_test.csv")
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_text, "split": "train"}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": validation_text, "split": "validation"}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_text, "split": "test"}),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         with open(filepath, encoding="utf-8") as f:
-             reader = csv.DictReader(f, delimiter=",")
-             # Rows already carry source_text, reply_text and label columns;
-             # add a sequential string id to match the declared features.
-             for guid, instance in enumerate(reader):
-                 instance["id"] = str(guid)
-                 yield guid, instance
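Before this commit, the dataset was loaded through the script above; after it, the parquet files are read directly. For reference, a hedged sketch of the old script-based path, assuming a local checkout where the three CSVs sit beside `rumoureval_2019.py`:

```python
from datasets import load_dataset

# Local-script loading (pre-parquet). Recent `datasets` releases may also
# require trust_remote_code=True for script-based loading.
ds = load_dataset("rumoureval_2019.py")

print({split: len(ds[split]) for split in ds})
```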