Tasks: Text Classification
Sub-tasks: multi-label-classification
Formats: parquet
Languages: English
Size: 1M - 10M

Commit 63c1651 · Parent(s): 3d2ec16
Update parquet files
README.md
DELETED
@@ -1,223 +0,0 @@
---
language:
- en
paperswithcode_id: null
pretty_name: CivilComments
dataset_info:
  features:
  - name: text
    dtype: string
  - name: toxicity
    dtype: float32
  - name: severe_toxicity
    dtype: float32
  - name: obscene
    dtype: float32
  - name: threat
    dtype: float32
  - name: insult
    dtype: float32
  - name: identity_attack
    dtype: float32
  - name: sexual_explicit
    dtype: float32
  splits:
  - name: test
    num_bytes: 32073013
    num_examples: 97320
  - name: train
    num_bytes: 596835730
    num_examples: 1804874
  - name: validation
    num_bytes: 32326369
    num_examples: 97320
  download_size: 414947977
  dataset_size: 661235112
---

# Dataset Card for "civil_comments"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 395.73 MB
- **Size of the generated dataset:** 630.60 MB
- **Total amount of disk used:** 1026.33 MB

### Dataset Summary

The comments in this dataset come from an archive of the Civil Comments
platform, a commenting plugin for independent news sites. These public comments
were created from 2015 - 2017 and appeared on approximately 50 English-language
news sites across the world. When Civil Comments shut down in 2017, they chose
to make the public comments available in a lasting open archive to enable future
research. The original data, published on figshare, includes the public comment
text, some associated metadata such as article IDs, timestamps and
commenter-generated "civility" labels, but does not include user ids. Jigsaw
extended this dataset by adding additional labels for toxicity and identity
mentions. This data set is an exact replica of the data released for the
Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This
dataset is released under CC0, as is the underlying comment text.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 395.73 MB
- **Size of the generated dataset:** 630.60 MB
- **Total amount of disk used:** 1026.33 MB

An example of 'validation' looks as follows.
```
{
    "identity_attack": 0.0,
    "insult": 0.0,
    "obscene": 0.0,
    "severe_toxicity": 0.0,
    "sexual_explicit": 0.0,
    "text": "The public test.",
    "threat": 0.0,
    "toxicity": 0.0
}
```

### Data Fields

The data fields are the same among all splits.

#### default
- `text`: a `string` feature.
- `toxicity`: a `float32` feature.
- `severe_toxicity`: a `float32` feature.
- `obscene`: a `float32` feature.
- `threat`: a `float32` feature.
- `insult`: a `float32` feature.
- `identity_attack`: a `float32` feature.
- `sexual_explicit`: a `float32` feature.

### Data Splits

| name  |  train|validation|test |
|-------|------:|---------:|----:|
|default|1804874|     97320|97320|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{DBLP:journals/corr/abs-1903-04561,
  author    = {Daniel Borkan and
               Lucas Dixon and
               Jeffrey Sorensen and
               Nithum Thain and
               Lucy Vasserman},
  title     = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text
               Classification},
  journal   = {CoRR},
  volume    = {abs/1903.04561},
  year      = {2019},
  url       = {http://arxiv.org/abs/1903.04561},
  archivePrefix = {arXiv},
  eprint    = {1903.04561},
  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
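The deleted card's label fields are annotator fractions in [0, 1] rather than hard classes. As a hedged illustration of how such scores are typically consumed downstream, the sketch below derives 0/1 multi-label targets by thresholding; the 0.5 cutoff is a common convention from the Kaggle challenge, not something the card itself prescribes, and the record mirrors the card's "Data Instances" example.

```python
# Example record mirroring the card's "Data Instances" section.
example = {
    "text": "The public test.",
    "toxicity": 0.0,
    "severe_toxicity": 0.0,
    "obscene": 0.0,
    "threat": 0.0,
    "insult": 0.0,
    "identity_attack": 0.0,
    "sexual_explicit": 0.0,
}

LABELS = [
    "toxicity", "severe_toxicity", "obscene", "threat",
    "insult", "identity_attack", "sexual_explicit",
]

def binarize(record, threshold=0.5):
    """Turn annotator-fraction scores into 0/1 multi-label targets.

    The 0.5 threshold is an assumption (a common convention), not part of
    the dataset card.
    """
    return {label: int(record[label] >= threshold) for label in LABELS}

print(binarize(example))
```

Any other cutoff works the same way; lowering it trades precision for recall on the rarer labels.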
civil_comments.py
DELETED
@@ -1,148 +0,0 @@
# coding=utf-8
# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""CivilComments from Jigsaw Unintended Bias Kaggle Competition."""


import csv
import os

import datasets


_CITATION = """
@article{DBLP:journals/corr/abs-1903-04561,
  author    = {Daniel Borkan and
               Lucas Dixon and
               Jeffrey Sorensen and
               Nithum Thain and
               Lucy Vasserman},
  title     = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text
               Classification},
  journal   = {CoRR},
  volume    = {abs/1903.04561},
  year      = {2019},
  url       = {http://arxiv.org/abs/1903.04561},
  archivePrefix = {arXiv},
  eprint    = {1903.04561},
  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""

_DESCRIPTION = """
The comments in this dataset come from an archive of the Civil Comments
platform, a commenting plugin for independent news sites. These public comments
were created from 2015 - 2017 and appeared on approximately 50 English-language
news sites across the world. When Civil Comments shut down in 2017, they chose
to make the public comments available in a lasting open archive to enable future
research. The original data, published on figshare, includes the public comment
text, some associated metadata such as article IDs, timestamps and
commenter-generated "civility" labels, but does not include user ids. Jigsaw
extended this dataset by adding additional labels for toxicity and identity
mentions. This data set is an exact replica of the data released for the
Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This
dataset is released under CC0, as is the underlying comment text.
"""

_DOWNLOAD_URL = "https://storage.googleapis.com/jigsaw-unintended-bias-in-toxicity-classification/civil_comments.zip"


class CivilComments(datasets.GeneratorBasedBuilder):
    """Classification and tagging of 2M comments on news sites.

    This version of the CivilComments Dataset provides access to the primary
    seven labels that were annotated by crowd workers, the toxicity and other
    tags are a value between 0 and 1 indicating the fraction of annotators that
    assigned these attributes to the comment text.

    The other tags, which are only available for a fraction of the input examples
    are currently ignored, as are all of the attributes that were part of the
    original civil comments release. See the Kaggle documentation for more
    details about the available features.
    """

    VERSION = datasets.Version("0.9.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            # datasets.features.FeatureConnectors
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "toxicity": datasets.Value("float32"),
                    "severe_toxicity": datasets.Value("float32"),
                    "obscene": datasets.Value("float32"),
                    "threat": datasets.Value("float32"),
                    "insult": datasets.Value("float32"),
                    "identity_attack": datasets.Value("float32"),
                    "sexual_explicit": datasets.Value("float32"),
                }
            ),
            # The supervised_keys version is very impoverished.
            supervised_keys=("text", "toxicity"),
            homepage="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        dl_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filename": os.path.join(dl_path, "train.csv"), "toxicity_label": "target"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filename": os.path.join(dl_path, "test_public_expanded.csv"),
                    "toxicity_label": "toxicity",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filename": os.path.join(dl_path, "test_private_expanded.csv"),
                    "toxicity_label": "toxicity",
                },
            ),
        ]

    def _generate_examples(self, filename, toxicity_label):
        """Yields examples.

        Each example contains a text input and then seven annotation labels.

        Args:
          filename: the path of the file to be read for this split.
          toxicity_label: indicates 'target' or 'toxicity' to capture the variation
            in the released labels for this dataset.

        Yields:
          A dictionary of features, all floating point except the input text.
        """
        with open(filename, encoding="utf-8") as f:
            reader = csv.DictReader(f)
            for row in reader:
                example = {}
                example["text"] = row["comment_text"]
                example["toxicity"] = float(row[toxicity_label])
                for label in ["severe_toxicity", "obscene", "threat", "insult", "identity_attack", "sexual_explicit"]:
                    example[label] = float(row[label])
                yield row["id"], example
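The deleted loader's CSV handling can be exercised without downloading the real Kaggle files. This is a self-contained sketch that mirrors the `_generate_examples` logic on a tiny in-memory CSV: the column names (`comment_text`, `target`, and the six label columns) follow the script above, while the two sample rows and their scores are invented for illustration.

```python
import csv
import io

# Invented sample rows; only the column names come from the deleted script
# (the train split reads its toxicity score from the "target" column).
SAMPLE_CSV = """\
id,comment_text,target,severe_toxicity,obscene,threat,insult,identity_attack,sexual_explicit
101,This is a perfectly civil comment.,0.0,0.0,0.0,0.0,0.0,0.0,0.0
102,You are all idiots.,0.8,0.1,0.0,0.0,0.75,0.0,0.0
"""

def generate_examples(f, toxicity_label):
    """Yield (id, example) pairs the same way _generate_examples does."""
    reader = csv.DictReader(f)
    for row in reader:
        example = {"text": row["comment_text"], "toxicity": float(row[toxicity_label])}
        for label in ["severe_toxicity", "obscene", "threat", "insult", "identity_attack", "sexual_explicit"]:
            example[label] = float(row[label])
        yield row["id"], example

examples = dict(generate_examples(io.StringIO(SAMPLE_CSV), toxicity_label="target"))
print(examples["102"]["toxicity"])  # 0.8
```

Passing `toxicity_label="toxicity"` instead reproduces what the validation and test splits do, since the expanded Kaggle test files name the score column differently.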
dataset_infos.json
DELETED
@@ -1 +0,0 @@
{"default": {"description": "\nThe comments in this dataset come from an archive of the Civil Comments\nplatform, a commenting plugin for independent news sites. These public comments\nwere created from 2015 - 2017 and appeared on approximately 50 English-language\nnews sites across the world. When Civil Comments shut down in 2017, they chose\nto make the public comments available in a lasting open archive to enable future\nresearch. The original data, published on figshare, includes the public comment\ntext, some associated metadata such as article IDs, timestamps and\ncommenter-generated \"civility\" labels, but does not include user ids. Jigsaw\nextended this dataset by adding additional labels for toxicity and identity\nmentions. This data set is an exact replica of the data released for the\nJigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This\ndataset is released under CC0, as is the underlying comment text.\n", "citation": "\n@article{DBLP:journals/corr/abs-1903-04561,\n  author    = {Daniel Borkan and\n               Lucas Dixon and\n               Jeffrey Sorensen and\n               Nithum Thain and\n               Lucy Vasserman},\n  title     = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text\n               Classification},\n  journal   = {CoRR},\n  volume    = {abs/1903.04561},\n  year      = {2019},\n  url       = {http://arxiv.org/abs/1903.04561},\n  archivePrefix = {arXiv},\n  eprint    = {1903.04561},\n  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},\n  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},\n  bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "toxicity": {"dtype": "float32", "id": null, "_type": "Value"}, "severe_toxicity": {"dtype": "float32", "id": null, "_type": "Value"}, "obscene": {"dtype": "float32", "id": null, "_type": "Value"}, "threat": {"dtype": "float32", "id": null, "_type": "Value"}, "insult": {"dtype": "float32", "id": null, "_type": "Value"}, "identity_attack": {"dtype": "float32", "id": null, "_type": "Value"}, "sexual_explicit": {"dtype": "float32", "id": null, "_type": "Value"}}, "supervised_keys": {"input": "text", "output": "toxicity"}, "builder_name": "civil_comments", "config_name": "default", "version": {"version_str": "0.9.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 9, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 32073013, "num_examples": 97320, "dataset_name": "civil_comments"}, "train": {"name": "train", "num_bytes": 596835730, "num_examples": 1804874, "dataset_name": "civil_comments"}, "validation": {"name": "validation", "num_bytes": 32326369, "num_examples": 97320, "dataset_name": "civil_comments"}}, "download_checksums": {"https://storage.googleapis.com/jigsaw-unintended-bias-in-toxicity-classification/civil_comments.zip": {"num_bytes": 414947977, "checksum": "767b71a3d9dc7a2eceb234d0c3e7e38604e11f59c12ba1cbb888ffd4ce6b6271"}}, "download_size": 414947977, "dataset_size": 661235112, "size_in_bytes": 1076183089}}
default/civil_comments-test.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d46cf9612e00db8e459c56287725d5e511d4b51249588a0f857a248c879948d
size 20800264
default/civil_comments-train-00000-of-00002.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2568c4a7db2c4447af707a9e50881774302c2350c184313ea069ad3c5af32b1d
size 318751522
default/civil_comments-train-00001-of-00002.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f74f723c1c45b3c46163e03e0e01b0ed71af38ea912df3a86506560d4de3ddaa
size 61561648
default/civil_comments-validation.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dabee0eacbf56ad9c23f1f79fc31771e8ac67078460315cfe117875a2c2e13d
size 20955568
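The four files added by this commit are Git LFS pointers, not the parquet bytes themselves: each pointer is a tiny text file whose `version`/`oid`/`size` lines describe the real blob. A small sketch of decoding them, with the pointer bodies copied verbatim from the diff above:

```python
# Pointer bodies copied from the four ADDED files in this commit.
POINTERS = {
    "default/civil_comments-test.parquet": (
        "version https://git-lfs.github.com/spec/v1\n"
        "oid sha256:7d46cf9612e00db8e459c56287725d5e511d4b51249588a0f857a248c879948d\n"
        "size 20800264\n"
    ),
    "default/civil_comments-train-00000-of-00002.parquet": (
        "version https://git-lfs.github.com/spec/v1\n"
        "oid sha256:2568c4a7db2c4447af707a9e50881774302c2350c184313ea069ad3c5af32b1d\n"
        "size 318751522\n"
    ),
    "default/civil_comments-train-00001-of-00002.parquet": (
        "version https://git-lfs.github.com/spec/v1\n"
        "oid sha256:f74f723c1c45b3c46163e03e0e01b0ed71af38ea912df3a86506560d4de3ddaa\n"
        "size 61561648\n"
    ),
    "default/civil_comments-validation.parquet": (
        "version https://git-lfs.github.com/spec/v1\n"
        "oid sha256:0dabee0eacbf56ad9c23f1f79fc31771e8ac67078460315cfe117875a2c2e13d\n"
        "size 20955568\n"
    ),
}

def parse_pointer(text):
    """Parse an LFS pointer into {"version": ..., "oid": ..., "size": ...}."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

# Total bytes across the four parquet shards.
total = sum(int(parse_pointer(body)["size"]) for body in POINTERS.values())
print(total)  # 422069002
```

The shard total (422069002 bytes) is close to, but not the same as, the original zip's `download_size` of 414947977 bytes recorded in the deleted `dataset_infos.json`, since the commit repackages the CSVs as parquet.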