Tasks: Text Classification
Modalities: Text
Formats: csv
Sub-tasks: hate-speech-detection
Languages: Romanian
Size: 1K - 10K
Tags: hate-speech-detection
License: apache-2.0
Update README.md
README.md
CHANGED
@@ -51,58 +51,29 @@ extra_gated_prompt: "Warning: this repository contains harmful content (abusive

## Dataset Description

-- **Homepage:** [https://
- **Repository:**
-- **Paper:**
-- **Point of Contact:** [

### Dataset Summary

-
-
-* Arabic
-* Danish
-* English
-* Greek
-* Turkish
-
-The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019.
-In this taxonomy we break down offensive content into the following three sub-tasks taking the type and target of offensive content into account.
-The following sub-tasks were organized:
-
-* Sub-task A - Offensive language identification;
-* Sub-task B - Automatic categorization of offense types;
-* Sub-task C - Offense target identification.
-
-English training data is omitted so needs to be collected otherwise (see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp))
-
-The source datasets come from:
-
-* Arabic [https://arxiv.org/pdf/2004.02192.pdf](https://arxiv.org/pdf/2004.02192.pdf), [https://aclanthology.org/2021.wanlp-1.13/](https://aclanthology.org/2021.wanlp-1.13/)
-* Danish [https://arxiv.org/pdf/1908.04531.pdf](https://arxiv.org/pdf/1908.04531.pdf), [https://aclanthology.org/2020.lrec-1.430/](https://aclanthology.org/2020.lrec-1.430/)
-* English [https://arxiv.org/pdf/2004.14454.pdf](https://arxiv.org/pdf/2004.14454.pdf), [https://aclanthology.org/2021.findings-acl.80.pdf](https://aclanthology.org/2021.findings-acl.80.pdf)
-* Greek [https://arxiv.org/pdf/2003.07459.pdf](https://arxiv.org/pdf/2003.07459.pdf), [https://aclanthology.org/2020.lrec-1.629/](https://aclanthology.org/2020.lrec-1.629/)
-* Turkish [https://aclanthology.org/2020.lrec-1.758/](https://aclanthology.org/2020.lrec-1.758/)
-
-### Supported Tasks and Leaderboards
-
-* [OffensEval 2020](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)

### Languages

-

## Dataset Structure

-There are five named configs, one per language:
-
-* `ar` Arabic
-* `da` Danish
-* `en` English
-* `gr` Greek
-* `tr` Turkish
-
-The training data for English is absent - this is 9M tweets that need to be rehydrated on their own. See [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp)

### Data Instances

@@ -111,41 +82,42 @@ An example of 'train' looks as follows.

```
{
-'
'text': 'PLACEHOLDER TEXT',
-'
}
```

### Data Fields

-- `
- `text`: a `string`.
-- `

### Data Splits

| name |train|test|
|---------|----:|---:|
-
-
-|en|0|3887|
-|gr|8743|1544|
-|tr|31277|3515|

## Dataset Creation

### Curation Rationale

-Collecting data for abusive language classification

### Source Data

#### Initial Data Collection and Normalization

-

#### Who are the source language producers?

@@ -155,11 +127,11 @@ Social media users

#### Annotation process

-

#### Who are the annotators?

-

### Personal and Sensitive Information

@@ -181,46 +153,18 @@ The data definitely contains abusive language. The data could be used to develop

### Dataset Curators

-The datasets is curated by each sub-part's paper authors.

### Licensing Information

-This data is available and distributed under

### Citation Information

```
-
-title = "{S}em{E}val-2020 Task 12: Multilingual Offensive Language Identification in Social Media ({O}ffens{E}val 2020)",
-author = {Zampieri, Marcos and
-Nakov, Preslav and
-Rosenthal, Sara and
-Atanasova, Pepa and
-Karadzhov, Georgi and
-Mubarak, Hamdy and
-Derczynski, Leon and
-Pitenis, Zeses and
-{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}},
-booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
-month = dec,
-year = "2020",
-address = "Barcelona (online)",
-publisher = "International Committee for Computational Linguistics",
-url = "https://aclanthology.org/2020.semeval-1.188",
-doi = "10.18653/v1/2020.semeval-1.188",
-pages = "1425--1447",
-abstract = "We present the results and the main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted official runs on the test data, and 70 teams submitted system description papers.",
-}

```

### Contributions

-Author-added dataset [@leondz](https://github.com/leondz)
-
-
----
-FB-RO-Offense corpus, an offensive speech dataset containing 4,455 user-generated comments from Facebook live broadcasts
-
----

## Dataset Description

+- **Homepage:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Repository:**
+- **Paper:** To be announced
+- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)

### Dataset Summary

+The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 user-generated comments from Facebook live broadcasts, available in Romanian.

+The annotation follows the hierarchical tagset proposed in the Germeval 2018 dataset.
+The following classes are available:
+* OTHER: Non-Offensive Language
+* OFFENSIVE:
+  - PROFANITY
+  - INSULT
+  - ABUSE
+
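
Because the tagset is hierarchical, the fine-grained classes can be collapsed into a coarse OTHER vs. OFFENSIVE decision. A minimal Python sketch, assuming the label strings match the list above:

```python
# Two-level tagset sketch: OTHER vs. OFFENSIVE, with OFFENSIVE refined into
# PROFANITY / INSULT / ABUSE. Label strings are assumptions based on this card.
FINE_TO_COARSE = {
    "OTHER": "OTHER",
    "PROFANITY": "OFFENSIVE",
    "INSULT": "OFFENSIVE",
    "ABUSE": "OFFENSIVE",
}

def to_binary(label: str) -> str:
    """Collapse a fine-grained label into the binary offensive/non-offensive decision."""
    return FINE_TO_COARSE[label]

assert to_binary("INSULT") == "OFFENSIVE"
assert to_binary("OTHER") == "OTHER"
```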
### Languages

+Romanian

## Dataset Structure

### Data Instances

```
{
+'sender': '$USER1208',
+'no_reacts': 1,
'text': 'PLACEHOLDER TEXT',
+'label': 'OTHER',
}
```

### Data Fields

+- `sender`: a `string` feature.
+- `no_reacts`: an `integer`.
- `text`: a `string`.
+- `label`: categorical `OTHER`, `PROFANITY`, `INSULT`, `ABUSE`
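
For illustration only, a sketch of loading the corpus with the Hugging Face `datasets` library and reading these fields; the Hub id `readerbench/ro-fb-offense` (taken from the homepage URL above), the `train` split name, and whether `label` comes back as a string or a class index are assumptions based on this card:

```python
from collections import Counter

from datasets import load_dataset

# Assumed Hub id; adjust if the actual repository id differs.
ds = load_dataset("readerbench/ro-fb-offense")

example = ds["train"][0]  # assumes a "train" split as listed under Data Splits
print(example["sender"], example["no_reacts"], example["label"])
print(example["text"][:80])

# Rough label distribution over the training split (values may be strings or ids).
print(Counter(ds["train"]["label"]))
```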

### Data Splits

| name |train|test|
|---------|----:|---:|
+|ro|x|x|
+
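
Only train and test splits are listed (sizes shown as placeholders), so a validation set has to be held out locally; a small sketch under the same assumed repo id:

```python
from datasets import load_dataset

# Same assumed Hub id as above.
ds = load_dataset("readerbench/ro-fb-offense")
print({name: split.num_rows for name, split in ds.items()})

# Carve a 10% validation set out of the training split, since none is provided.
held_out = ds["train"].train_test_split(test_size=0.1, seed=42)
train_ds, valid_ds = held_out["train"], held_out["test"]
print(train_ds.num_rows, valid_ds.num_rows)
```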

## Dataset Creation

### Curation Rationale

+Collecting data for abusive language classification for the Romanian language.

### Source Data

+Facebook comments
+
#### Initial Data Collection and Normalization

+

#### Who are the source language producers?

#### Annotation process

+

#### Who are the annotators?

+Native speakers

### Personal and Sensitive Information

### Dataset Curators

### Licensing Information

+This data is available and distributed under the Apache-2.0 license.

### Citation Information

```
+tbd

```

### Contributions

|