Commit 4d6d751 · 1 parent: da643fc
Joanna Baran committed

initial commit
.gitattributes CHANGED
@@ -53,3 +53,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -8,4 +8,135 @@ language:
  pretty_name: Natural Language Inference datasets
  size_categories:
  - 1K<n<10K
- ---
+ ---
+
+ # Natural Language Inference datasets
+
+ ## Table of Contents
+
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Point of Contact:** [email protected]
+
+ ### Dataset Summary
+
+ A collection of sentence pairs annotated for the Natural Language Inference (NLI) task.
+ It consists of two distinct datasets:
+
+ - SNLI
+ - WNLI
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ - Polish (PL)
+ - English (EN)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The data are stored in JSONL format; each sample consists of a sentence pair with a label.
+
+ ```json
+ {
+   "id": 0,
+   "sentence_1": "The tourist is wandering on the beach.",
+   "sentence_2": "The tourist crosses the forest stream.",
+   "label": "contradiction"
+ }
+ ```
+
+ ### Data Fields
+
+ Description of the JSON keys:
+
+ - `id`: identifier of the sentence pair
+ - `sentence_1`: first sentence
+ - `sentence_2`: second sentence
+ - `label`: NLI annotation
+ - `origin_label`: original annotation from the Stanford NLI data (present only in the SNLI configs)
+
+ ### Data Splits
+
+ We do not specify an exact data split for training and evaluation.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection, Normalization and Post-processing
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ - professional linguists (mention all people involved)
+
+ ### Personal and Sensitive Information
+
+ The datasets do not contain any personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Arkadiusz Janz ([email protected])
+
+ ### Licensing Information
+
+ - SNLI: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
+ - WNLI: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
+
+ ### Citation Information
+
+ [More Information Needed]
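
A minimal usage sketch for the configs this commit defines (the repo id `clarin-knext/nli_datasets` and the config names `snli_pl`, `snli_en`, `wnli_pl`, `wnli_en` come from `nli_datasets.py` added below; everything is exposed as a single `train` split):

```python
# Minimal sketch: load one config of this dataset with the `datasets` library.
# Config names are defined in nli_datasets.py below; recent `datasets`
# versions may additionally require trust_remote_code=True for
# script-based datasets like this one.
from datasets import load_dataset

ds = load_dataset("clarin-knext/nli_datasets", "snli_pl", split="train")
print(ds[0]["sentence_1"], "|", ds[0]["sentence_2"], "->", ds[0]["label"])
```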
data/snli_nli_en.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b6b75f1fbbffefaf2322df513186dd122b78b60a6452536aa7e1c866b500660
+ size 247449
data/snli_nli_pl.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f035ef2ace1c9262c92a99ca7e29faa6e9e22f35005c25bfd5a919833a46c3e
+ size 259367
data/wnli_nli_en.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdb8e0bcea89b097b8a31867aeb175991fb087ed4e7e616918a9dc7ee0e14a1f
+ size 192772
data/wnli_nli_pl.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ecb40c0176604f9916b9852cd694851046f3ab704a3a95ed0e18db16243b824
+ size 194324
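
The four `.jsonl` entries above are Git LFS pointers; the actual payloads sit behind the Hub's `resolve/main` URLs that the loading script below downloads. A stdlib-only sketch of fetching and parsing one file directly (the URL pattern is taken from `_BASE_URL` in `nli_datasets.py`):

```python
# Minimal sketch: fetch one JSONL file straight from the Hub and parse it
# line by line, mirroring what _generate_examples does after download.
import json
from urllib.request import urlopen

URL = ("https://huggingface.co/datasets/clarin-knext/nli_datasets"
       "/resolve/main/data/wnli_nli_en.jsonl")

with urlopen(URL) as resp:
    first_line = resp.read().decode("utf-8").splitlines()[0]

record = json.loads(first_line)  # {"id": ..., "sentence_1": ..., ...}
print(record["id"], record["label"])
```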
nli_datasets.py ADDED
@@ -0,0 +1,108 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ import json
+
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _DESCRIPTION = """\
+ Data for the NLI task, annotated manually
+ """
+
+ _BASE_URL = "https://huggingface.co/datasets/clarin-knext/nli_datasets/resolve/main/data/"
+
+ _DATASET_NAME = [
+     "snli_pl",
+     "snli_en",
+     "wnli_pl",
+     "wnli_en",
+ ]
+
+ # Map each config name "{dataset}_{lang}" to its "{dataset}_nli_{lang}.jsonl" file.
+ _URLS = {
+     f"{dataset}_{lang}": f"{_BASE_URL}{dataset}_nli_{lang}.jsonl"
+     for dataset, lang in (dataset_name.split('_') for dataset_name in _DATASET_NAME)
+ }
+
+
+ class NLIDatasetBuilderConfig(datasets.BuilderConfig):
+     def __init__(
+         self,
+         data_url: str,
+         name: str,
+         **kwargs,
+     ):
+         super(NLIDatasetBuilderConfig, self).__init__(
+             name=name,
+             version=datasets.Version("1.0.0"),
+             **kwargs,
+         )
+
+         self.name = name
+         self.data_url = data_url
+         if self.name not in _DATASET_NAME:
+             raise ValueError(
+                 f"Config name `{self.name}` is not available. Enter one of: {_DATASET_NAME}"
+             )
+
+
+ class NLIDataset(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         NLIDatasetBuilderConfig(
+             name=dataset,
+             data_url=_URLS[dataset],
+             description=f"Dataset {dataset} with NLI annotation.",
+         )
+         for dataset in _DATASET_NAME
+     ]
+
+     DEFAULT_CONFIG_NAME = "wnli_en"
+
+     def _info(self) -> datasets.DatasetInfo:
+         features = {
+             "id": datasets.Value("int32"),
+             "sentence_1": datasets.Value("string"),
+             "sentence_2": datasets.Value("string"),
+             "label": datasets.Value("string"),
+         }
+
+         # SNLI configs additionally carry the original Stanford NLI annotation.
+         if self.config.name.startswith('snli'):
+             features["origin_label"] = datasets.Value("string")
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(features),
+             supervised_keys=None,
+             # license=_LICENSE,
+             # citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         filepath = dl_manager.download_and_extract(self.config.data_url)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": filepath,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath: str):
+         # Each JSONL line is one example; a running counter provides unique keys.
+         key_iter = 0
+         with open(filepath, encoding="utf-8") as f:
+             for data in (json.loads(line) for line in f):
+                 yield key_iter, data
+                 key_iter += 1
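
To make the URL construction concrete, here is the `_URLS` comprehension from the script as a standalone snippet; each config name `{dataset}_{lang}` maps to a file named `{dataset}_nli_{lang}.jsonl` under `data/`:

```python
# Standalone reproduction of the _URLS comprehension in nli_datasets.py.
_BASE_URL = "https://huggingface.co/datasets/clarin-knext/nli_datasets/resolve/main/data/"
_DATASET_NAME = ["snli_pl", "snli_en", "wnli_pl", "wnli_en"]

_URLS = {
    f"{dataset}_{lang}": f"{_BASE_URL}{dataset}_nli_{lang}.jsonl"
    for dataset, lang in (name.split("_") for name in _DATASET_NAME)
}

print(_URLS["snli_pl"])
# -> https://huggingface.co/datasets/clarin-knext/nli_datasets/resolve/main/data/snli_nli_pl.jsonl
```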