Datasets:
parquet-converter
committed on
Commit • 36629bd • 1 Parent(s): df3fd7b
Update parquet files
Browse files:
- .gitattributes +0 -54
- README.md +0 -104
- dataset_infos.json +0 -1
- default/si_nli-test.parquet +3 -0
- default/si_nli-train.parquet +3 -0
- default/si_nli-validation.parquet +3 -0
- si_nli.py +0 -123
.gitattributes DELETED
@@ -1,54 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.lz4 filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-# Audio files - uncompressed
-*.pcm filter=lfs diff=lfs merge=lfs -text
-*.sam filter=lfs diff=lfs merge=lfs -text
-*.raw filter=lfs diff=lfs merge=lfs -text
-# Audio files - compressed
-*.aac filter=lfs diff=lfs merge=lfs -text
-*.flac filter=lfs diff=lfs merge=lfs -text
-*.mp3 filter=lfs diff=lfs merge=lfs -text
-*.ogg filter=lfs diff=lfs merge=lfs -text
-*.wav filter=lfs diff=lfs merge=lfs -text
-# Image files - uncompressed
-*.bmp filter=lfs diff=lfs merge=lfs -text
-*.gif filter=lfs diff=lfs merge=lfs -text
-*.png filter=lfs diff=lfs merge=lfs -text
-*.tiff filter=lfs diff=lfs merge=lfs -text
-# Image files - compressed
-*.jpg filter=lfs diff=lfs merge=lfs -text
-*.jpeg filter=lfs diff=lfs merge=lfs -text
-*.webp filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,104 +0,0 @@
----
-annotations_creators:
-- expert-generated
-language:
-- sl
-language_creators:
-- found
-- expert-generated
-license:
-- cc-by-nc-sa-4.0
-multilinguality:
-- monolingual
-pretty_name: Slovene natural language inference dataset
-size_categories:
-- 1K<n<10K
-source_datasets: []
-tags: []
-task_categories:
-- text-classification
-task_ids:
-- multi-class-classification
-- natural-language-inference
----
-
-# Dataset Card for SI-NLI
-
-### Dataset Summary
-
-SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked with modifying the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced, since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets with 4,392, 547, and 998 examples, respectively.
-
-Only the hypothesis and premise are given in the test set (i.e., no annotations), since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting your test set predictions to SloBENCH to get an evaluation score and see how it compares to others.
-
-If you have access to the private test set (with labels), you can load it instead of the public one by setting the environment variable `SI_NLI_TEST_PATH` to the file path.
-
-### Supported Tasks and Leaderboards
-
-Natural language inference.
-
-### Languages
-
-Slovenian.
-
-## Dataset Structure
-
-### Data Instances
-
-A sample instance from the dataset:
-```
-{
-    'pair_id': 'P0',
-    'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
-    'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
-    'annotation1': 'entailment',
-    'annotator1_id': 'annotator_C',
-    'annotation2': 'entailment',
-    'annotator2_id': 'annotator_A',
-    'annotation3': '',
-    'annotator3_id': '',
-    'annotation_final': 'entailment',
-    'label': 'entailment'
-}
-```
-
-### Data Fields
-
-- `pair_id`: string identifier of the pair (`""` in the test set),
-- `premise`: premise sentence,
-- `hypothesis`: hypothesis sentence,
-- `annotation1`: the first annotation (`""` if not available),
-- `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
-- `annotation2`: the second annotation (`""` if not available),
-- `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
-- `annotation3`: the third annotation (`""` if not available),
-- `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
-- `annotation_final`: aggregated annotation where it could be determined unanimously (`""` if not available or a unanimous agreement could not be reached),
-- `label`: aggregated annotation: either the same as `annotation_final` (in case of agreement), the same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set.** This aggregation is only the simplest possibility; users may instead do something more advanced based on the individual annotations (e.g., learning with disagreement).
-
-\* A small number of examples did not go through the annotation process because they were constructed by the authors while writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have individual annotations or annotator IDs.
-
-## Additional Information
-
-### Dataset Curators
-
-Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
-
-### Licensing Information
-
-CC BY-NC-SA 4.0.
-
-### Citation Information
-
-```
-@misc{sinli,
-    title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
-    author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
-    url = {http://hdl.handle.net/11356/1707},
-    note = {Slovenian language resource repository {CLARIN}.{SI}},
-    year = {2022}
-}
-```
-
-### Contributions
-
-Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
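The `label` aggregation described in the Data Fields section can be sketched in a few lines: use the unanimous annotation when all available annotations agree, otherwise fall back to the first annotation. The helper below is illustrative only (it is not code from this repository):

```python
def aggregate_label(annotation1, annotation2, annotation3):
    """Aggregate up to three annotations into a single label.

    Mirrors the simple scheme the dataset card describes: unanimous
    agreement yields that label; disagreement falls back to annotation1.
    """
    votes = [a for a in (annotation1, annotation2, annotation3) if a]  # drop missing ("")
    if votes and all(v == votes[0] for v in votes):
        return votes[0]  # unanimous: same as `annotation_final`
    return annotation1   # disagreement (or no annotations): first annotation
```

As the card notes, this is only the simplest aggregation; the per-annotator columns remain available for approaches such as learning with disagreement.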
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"default": {"description": "SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs \n(premise and hypothesis) that are manually labeled with the labels \"entailment\", \"contradiction\", and \"neutral\". \nThe dataset was created using sentences that appear in the Slovenian reference corpus ccKres. \nAnnotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. \nThe dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) \nfor each candidate sentence pair.\n", "citation": "@misc{sinli,\n title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},\n author = {Klemen, Matej and {\\v Z}agar, Ale{\\v s} and {\\v C}ibej, Jaka and Robnik-{\\v S}ikonja, Marko},\n url = {http://hdl.handle.net/11356/1707},\n note = {Slovenian language resource repository {CLARIN}.{SI}},\n year = {2022}\n}\n", "homepage": "http://hdl.handle.net/11356/1707", "license": "Creative Commons - Attribution 4.0 International (CC BY 4.0)", "features": {"pair_id": {"dtype": "string", "id": null, "_type": "Value"}, "premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "annotation1": {"dtype": "string", "id": null, "_type": "Value"}, "annotator1_id": {"dtype": "string", "id": null, "_type": "Value"}, "annotation2": {"dtype": "string", "id": null, "_type": "Value"}, "annotator2_id": {"dtype": "string", "id": null, "_type": "Value"}, "annotation3": {"dtype": "string", "id": null, "_type": "Value"}, "annotator3_id": {"dtype": "string", "id": null, "_type": "Value"}, "annotation_final": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "si_nli", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1352635, "num_examples": 4392, "dataset_name": "si_nli"}, "validation": {"name": "validation", "num_bytes": 164561, "num_examples": 547, "dataset_name": "si_nli"}, "test": {"name": "test", "num_bytes": 246518, "num_examples": 998, "dataset_name": "si_nli"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1707/SI-NLI.zip": {"num_bytes": 410093, "checksum": "e2b3be92049ebb68916a236465940880aeb002d148dce7df94b71e8779080274"}}, "download_size": 410093, "post_processing_size": null, "dataset_size": 1763714, "size_in_bytes": 2173807}}
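A quick sanity check on the split sizes recorded in this metadata (numbers copied from the JSON above): the three splits together account for all 5,937 pairs mentioned in the description.

```python
# Split sizes as recorded in dataset_infos.json
splits = {"train": 4392, "validation": 547, "test": 998}

# Should equal the 5,937 human-created sentence pairs in the description
total_examples = sum(splits.values())
```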
default/si_nli-test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df584ac784de14bdf62987437aee601374caf650284ced7bee0b1156085de1fb
+size 107320

default/si_nli-train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f260282b2312bd1b12c0887a54e7264a977d09b4c816832cb464cad933368055
+size 507509

default/si_nli-validation.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf2bfcae412f979533f1863031bc2128d04c155b52209f4b7b65fd0b07a7994c
+size 63094
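Each added `.parquet` entry above is a Git LFS pointer file (three `key value` lines naming the spec version, content hash, and byte size), not the Parquet bytes themselves. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of this repository):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    return fields

# The pointer stored as default/si_nli-test.parquet in this commit
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:df584ac784de14bdf62987437aee601374caf650284ced7bee0b1156085de1fb\n"
    "size 107320\n"
)
info = parse_lfs_pointer(pointer)
```

Git LFS resolves the `oid` to the actual file content at checkout time, which is why the diff shows only three lines per Parquet file.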
si_nli.py DELETED
@@ -1,123 +0,0 @@
-"""SI-NLI is a Slovene natural language inference dataset."""
-
-
-import csv
-import json
-import os
-
-import datasets
-
-
-_CITATION = """\
-@misc{sinli,
-    title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
-    author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
-    url = {http://hdl.handle.net/11356/1707},
-    note = {Slovenian language resource repository {CLARIN}.{SI}},
-    year = {2022}
-}
-"""
-
-_DESCRIPTION = """\
-SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs
-(premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral".
-The dataset was created using sentences that appear in the Slovenian reference corpus ccKres.
-Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels.
-The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral)
-for each candidate sentence pair.
-"""
-
-_HOMEPAGE = "http://hdl.handle.net/11356/1707"
-
-_LICENSE = "Creative Commons - Attribution 4.0 International (CC BY 4.0)"
-
-_URLS = {
-    "si-nli": "https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1707/SI-NLI.zip"
-}
-
-NA_STR = ""
-UNIFIED_LABELS = {"E": "entailment", "N": "neutral", "C": "contradiction"}
-
-
-class SINLI(datasets.GeneratorBasedBuilder):
-    """SI-NLI is a Slovene natural language inference dataset."""
-
-    VERSION = datasets.Version("1.0.0")
-
-    def _info(self):
-        features = datasets.Features({
-            "pair_id": datasets.Value("string"),
-            "premise": datasets.Value("string"),
-            "hypothesis": datasets.Value("string"),
-            "annotation1": datasets.Value("string"),
-            "annotator1_id": datasets.Value("string"),
-            "annotation2": datasets.Value("string"),
-            "annotator2_id": datasets.Value("string"),
-            "annotation3": datasets.Value("string"),
-            "annotator3_id": datasets.Value("string"),
-            "annotation_final": datasets.Value("string"),
-            "label": datasets.Value("string")
-        })
-
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION
-        )
-
-    def _split_generators(self, dl_manager):
-        urls = _URLS["si-nli"]
-        data_dir = dl_manager.download_and_extract(urls)
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "file_path": os.path.join(data_dir, "SI-NLI", "train.tsv"),
-                    "split": "train"
-                }
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                gen_kwargs={
-                    "file_path": os.path.join(data_dir, "SI-NLI", "dev.tsv"),
-                    "split": "dev"
-                }
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                gen_kwargs={
-                    # Allow the user to load the private test set with this script if they have access to it
-                    "file_path": os.getenv("SI_NLI_TEST_PATH", os.path.join(data_dir, "SI-NLI", "test.tsv")),
-                    "split": "test"
-                }
-            )
-        ]
-
-    def _generate_examples(self, file_path, split):
-        with open(file_path, encoding="utf-8") as f:
-            reader = csv.reader(f, delimiter="\t", quotechar='"')
-            header = next(reader)
-
-            for i, row in enumerate(reader):
-                pair_id = annotation1 = annotator1_id = annotation2 = annotator2_id = annotation3 = annotator3_id = \
-                    annotation_final = label = NA_STR
-
-                # Public test set only contains the premise and the hypothesis
-                if len(row) == 2:
-                    premise, hypothesis = row
-                # Public train/validation set and private test set contain additional annotation data
-                else:
-                    pair_id, premise, hypothesis, annotation1, _, annotator1_id, annotation2, _, annotator2_id, \
-                        annotation3, _, annotator3_id, annotation_final, label = row
-
-                yield i, {
-                    "pair_id": pair_id,
-                    "premise": premise, "hypothesis": hypothesis,
-                    "annotation1": UNIFIED_LABELS.get(annotation1, annotation1), "annotator1_id": annotator1_id,
-                    "annotation2": UNIFIED_LABELS.get(annotation2, annotation2), "annotator2_id": annotator2_id,
-                    "annotation3": UNIFIED_LABELS.get(annotation3, annotation3), "annotator3_id": annotator3_id,
-                    "annotation_final": UNIFIED_LABELS.get(annotation_final, annotation_final),
-                    "label": label
-                }
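This commit removes the loading script because the Parquet files are now read directly. The script's row handling can still be sketched in isolation: public test rows carry only two columns (premise, hypothesis), the other splits carry the full annotation columns, and the raw E/N/C codes are mapped to full label names. The `parse_row` helper below is illustrative only, not code from the repository:

```python
# Same code-to-label mapping as the deleted script
UNIFIED_LABELS = {"E": "entailment", "N": "neutral", "C": "contradiction"}


def parse_row(row):
    """Reduce one TSV row to (premise, hypothesis, annotation1)."""
    if len(row) == 2:  # public test set: premise and hypothesis only
        premise, hypothesis = row
        return premise, hypothesis, ""
    # Train/validation (and private test) rows carry the full columns;
    # annotation codes are unified to the long label names.
    premise, hypothesis, annotation1 = row[1], row[2], row[3]
    return premise, hypothesis, UNIFIED_LABELS.get(annotation1, annotation1)
```

After the conversion, none of this branching is needed: the Parquet splits already store the unified string fields declared in `dataset_infos.json`.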