Commit e86b3e1 (1 parent: a5fc1ad)
Update parquet files

Files changed:
- .gitattributes +0 -51
- README.md +0 -175
- all_data/ssj500k-train.parquet +3 -0
- dataset_infos.json +0 -1
- dependency_parsing_jos/ssj500k-train.parquet +3 -0
- dependency_parsing_ud/ssj500k-train.parquet +3 -0
- multiword_expressions/ssj500k-train.parquet +3 -0
- named_entity_recognition/ssj500k-train.parquet +3 -0
- semantic_role_labeling/ssj500k-train.parquet +3 -0
- ssj500k.py +0 -365
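
The conversion keeps one Parquet file per configuration, so the configs documented in the deleted README below remain loadable through the `datasets` library. A minimal sketch (assuming the dataset is hosted under a repository id such as `cjvt/ssj500k`, which this commit does not state):

```
from datasets import load_dataset

# Hypothetical repository id -- replace with the namespace that actually hosts these Parquet files.
ner_split = load_dataset("cjvt/ssj500k", "named_entity_recognition", split="train")
print(ner_split[0]["words"])
print(ner_split[0]["ne_tags"])
```
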
.gitattributes
DELETED
@@ -1,51 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,175 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- - expert-generated
- language:
- - sl
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 1K<n<10K
- - 10K<n<100K
- source_datasets: []
- task_categories:
- - token-classification
- task_ids:
- - named-entity-recognition
- - part-of-speech
- - lemmatization
- - parsing
- pretty_name: ssj500k
- tags:
- - semantic-role-labeling
- - multiword-expression-detection
- ---
-
- # Dataset Card for ssj500k
-
- **Important**: there exists another HF implementation of the dataset ([classla/ssj500k](https://huggingface.co/datasets/classla/ssj500k)), but it seems to be more narrowly focused. **This implementation is designed for more general use** - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data.
-
- ### Dataset Summary
-
- The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks:
- - named entity recognition (config `named_entity_recognition`)
- - dependency parsing(*), Universal Dependencies style (config `dependency_parsing_ud`)
- - dependency parsing, JOS/MULTEXT-East style (config `dependency_parsing_jos`)
- - semantic role labeling (config `semantic_role_labeling`)
- - multi-word expressions (config `multiword_expressions`)
-
- If you want to load all the data along with their partial annotations, please use the config `all_data`.
-
- \* _The UD dependency parsing labels are included here for completeness, but using the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._
-
- ### Supported Tasks and Leaderboards
-
- Sentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection.
-
- ### Languages
-
- Slovenian.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A sample instance from the dataset (using the config `all_data`):
- ```
- {
-     'id_doc': 'ssj1',
-     'idx_par': 0,
-     'idx_sent': 0,
-     'id_words': ['ssj1.1.1.t1', 'ssj1.1.1.t2', 'ssj1.1.1.t3', 'ssj1.1.1.t4', 'ssj1.1.1.t5', 'ssj1.1.1.t6', 'ssj1.1.1.t7', 'ssj1.1.1.t8', 'ssj1.1.1.t9', 'ssj1.1.1.t10', 'ssj1.1.1.t11', 'ssj1.1.1.t12', 'ssj1.1.1.t13', 'ssj1.1.1.t14', 'ssj1.1.1.t15', 'ssj1.1.1.t16', 'ssj1.1.1.t17', 'ssj1.1.1.t18', 'ssj1.1.1.t19', 'ssj1.1.1.t20', 'ssj1.1.1.t21', 'ssj1.1.1.t22', 'ssj1.1.1.t23', 'ssj1.1.1.t24'],
-     'words': ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo', 'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel', ',', 'da', 'me', 'žena', 'vara', '.'],
-     'lemmas': ['"', 'tisti', 'večer', 'biti', 'preveč', 'popiti', ',', 'zgoditi', 'se', 'biti', 'mesec', 'dan', 'po', 'ta', ',', 'ko', 'biti', 'izvedeti', ',', 'da', 'jaz', 'žena', 'varati', '.'],
-     'msds': ['UPosTag=PUNCT', 'UPosTag=DET|Case=Gen|Gender=Masc|Number=Sing|PronType=Dem', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Sing', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=DET|PronType=Ind', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=VERB|Aspect=Perf|Gender=Neut|Number=Sing|VerbForm=Part', 'UPosTag=PRON|PronType=Prs|Reflex=Yes|Variant=Short', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=NOUN|Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Plur', 'UPosTag=ADP|Case=Loc', 'UPosTag=DET|Case=Loc|Gender=Neut|Number=Sing|PronType=Dem', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=PRON|Case=Acc|Number=Sing|Person=1|PronType=Prs|Variant=Short', 'UPosTag=NOUN|Case=Nom|Gender=Fem|Number=Sing', 'UPosTag=VERB|Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'UPosTag=PUNCT'],
-     'has_ne_ann': True,
-     'has_ud_dep_ann': True,
-     'has_jos_dep_ann': True,
-     'has_srl_ann': True,
-     'has_mwe_ann': True,
-     'ne_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'],
-     'ud_dep_head': [5, 2, 5, 5, 5, -1, 7, 5, 7, 7, 7, 10, 13, 10, 17, 17, 17, 13, 22, 22, 22, 22, 17, 5],
-     'ud_dep_rel': ['punct', 'det', 'obl', 'aux', 'advmod', 'root', 'punct', 'parataxis', 'expl', 'aux', 'obl', 'nmod', 'case', 'nmod', 'punct', 'mark', 'aux', 'acl', 'punct', 'mark', 'obj', 'nsubj', 'ccomp', 'punct'],
-     'jos_dep_head': [-1, 2, 5, 5, 5, -1, -1, -1, 7, 7, 7, 10, 13, 10, -1, 17, 17, 13, -1, 22, 22, 22, 17, -1],
-     'jos_dep_rel': ['Root', 'Atr', 'AdvO', 'PPart', 'AdvM', 'Root', 'Root', 'Root', 'PPart', 'PPart', 'AdvO', 'Atr', 'Atr', 'Atr', 'Root', 'Conj', 'PPart', 'Atr', 'Root', 'Conj', 'Obj', 'Sb', 'Obj', 'Root'],
-     'srl_info': [
-         {'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'},
-         {'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'},
-         {'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'},
-         {'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'},
-         {'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'},
-         {'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'}
-     ],
-     'mwe_info': [
-         {'type': 'IRV', 'word_indices': [7, 8]}
-     ]
- }
- ```
-
- ### Data Fields
-
- The following attributes are present in the most general config (`all_data`). Please see below for attributes present in the specific configs.
- - `id_doc`: a string containing the identifier of the document;
- - `idx_par`: an int32 containing the consecutive number of the paragraph, which the current sentence is a part of;
- - `idx_sent`: an int32 containing the consecutive number of the current sentence inside the current paragraph;
- - `id_words`: a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149;
- - `words`: a list of strings containing the words in the current sentence;
- - `lemmas`: a list of strings containing the lemmas in the current sentence;
- - `msds`: a list of strings containing the morphosyntactic description of words in the current sentence;
- - `has_ne_ann`: a bool indicating whether the current example has named entities annotated;
- - `has_ud_dep_ann`: a bool indicating whether the current example has dependencies (in UD style) annotated;
- - `has_jos_dep_ann`: a bool indicating whether the current example has dependencies (in JOS style) annotated;
- - `has_srl_ann`: a bool indicating whether the current example has semantic roles annotated;
- - `has_mwe_ann`: a bool indicating whether the current example has multi-word expressions annotated;
- - `ne_tags`: a list of strings containing the named entity tags encoded using IOB2 - if `has_ne_ann=False` all tokens are annotated with `"N/A"`;
- - `ud_dep_head`: a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is `-1`; if `has_ud_dep_ann=False` all tokens are annotated with `-2`;
- - `ud_dep_rel`: a list of strings containing the relation with the head for each word (using UD guidelines) - if `has_ud_dep_ann=False` all tokens are annotated with `"N/A"`;
- - `jos_dep_head`: a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is `-1`; if `has_jos_dep_ann=False` all tokens are annotated with `-2`;
- - `jos_dep_rel`: a list of strings containing the relation with the head for each word (using JOS guidelines) - if `has_jos_dep_ann=False` all tokens are annotated with `"N/A"`;
- - `srl_info`: a list of dicts, each containing index of the argument word, the head (verb) word, and the semantic role - if `has_srl_ann=False` this list is empty;
- - `mwe_info`: a list of dicts, each containing word indices and the type of a multi-word expression;
-
- #### Data fields in 'named_entity_recognition'
- ```
- ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ne_tags']
- ```
-
- #### Data fields in 'dependency_parsing_ud'
- ```
- ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ud_dep_head', 'ud_dep_rel']
- ```
-
- #### Data fields in 'dependency_parsing_jos'
- ```
- ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'jos_dep_head', 'jos_dep_rel']
- ```
-
- #### Data fields in 'semantic_role_labeling'
- ```
- ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'srl_info']
- ```
-
- #### Data fields in 'multiword_expressions'
- ```
- ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'mwe_info']
- ```
-
- ## Additional Information
-
- ### Dataset Curators
-
- Simon Krek; et al. (please see http://hdl.handle.net/11356/1434 for the full list)
-
- ### Licensing Information
-
- CC BY-NC-SA 4.0.
-
- ### Citation Information
-
- The paper describing the dataset:
- ```
- @InProceedings{krek2020ssj500k,
- title = {The ssj500k Training Corpus for Slovene Language Processing},
- author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
- booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
- year={2020},
- pages={24-33}
- }
- ```
-
- The resource itself:
- ```
- @misc{krek2021clarinssj500k,
- title = {Training corpus ssj500k 2.3},
- author = {Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Gantar, Polona and Kuzman, Taja and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja},
- url = {http://hdl.handle.net/11356/1434},
- year = {2021} }
- ```
-
- ### Contributions
-
- Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
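
Because only part of the corpus carries each annotation layer, the `has_*_ann` flags and the `-2` / `"N/A"` placeholders described in the deleted card matter when consuming the `all_data` config. A short illustrative sketch of keeping only SRL-annotated sentences (field names follow the card above; the repository id is hypothetical):

```
from datasets import load_dataset

# Hypothetical repository id; fields follow the dataset card above.
data = load_dataset("cjvt/ssj500k", "all_data", split="train")
srl_only = data.filter(lambda ex: ex["has_srl_ann"])
print(len(srl_only), srl_only[0]["srl_info"])
```
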
all_data/ssj500k-train.parquet
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92aa7fcd5412f4e091f6837f2f53331cb3a76808f5be8788176b42f1539b9855
+ size 11076342
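
The three added lines are a Git LFS pointer; the Parquet payload itself lives in LFS storage. Once the file has been fetched, the split can also be inspected without the `datasets` library, for example (a sketch assuming a local download of the file):

```
import pandas as pd

# Assumes the LFS object behind all_data/ssj500k-train.parquet was downloaded locally.
df = pd.read_parquet("all_data/ssj500k-train.parquet")
print(df.shape)
print(df.columns.tolist())
```
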
dataset_infos.json
DELETED
@@ -1 +0,0 @@
{"all_data": {"description": "The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,\nsentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated \nwith syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also \nannotated with semantic role labels. The morphosyntactic tags and syntactic dependencies are included both in the \nJOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.\n", "citation": "@InProceedings{krek2020ssj500k,\ntitle = {The ssj500k Training Corpus for Slovene Language Processing},\nauthor={Krek, Simon and Erjavec, Toma\u017e and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and \u010cibej, Jaka and Brank, Janez},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},\nyear={2020},\npages={24-33}\n}\n", "homepage": "http://hdl.handle.net/11356/1434", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "idx_par": {"dtype": "int32", "id": null, "_type": "Value"}, "idx_sent": {"dtype": "int32", "id": null, "_type": "Value"}, "id_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msds": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "has_ne_ann": {"dtype": "bool", "id": null, "_type": "Value"}, "has_ud_dep_ann": {"dtype": "bool", "id": null, "_type": "Value"}, "has_jos_dep_ann": {"dtype": "bool", "id": null, "_type": "Value"}, "has_srl_ann": {"dtype": "bool", "id": null, "_type": "Value"}, "has_mwe_ann": {"dtype": "bool", "id": null, "_type": "Value"}, "ne_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ud_dep_head": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ud_dep_rel": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "jos_dep_head": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "jos_dep_rel": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "srl_info": [{"idx_arg": {"dtype": "uint32", "id": null, "_type": "Value"}, "idx_head": {"dtype": "uint32", "id": null, "_type": "Value"}, "role": {"dtype": "string", "id": null, "_type": "Value"}}], "mwe_info": [{"type": {"dtype": "string", "id": null, "_type": "Value"}, "word_indices": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ssj500k", "config_name": "all_data", "version": {"version_str": "2.3.0", "description": null, "major": 2, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 69372407, "num_examples": 27829, "dataset_name": "ssj500k"}}, "download_checksums": 
{"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip": {"num_bytes": 13021836, "checksum": "08ac4d6cf74a45bc81f6e9ca53e7406c96c906c218cbb8ff2f7365e96655c460"}}, "download_size": 13021836, "post_processing_size": null, "dataset_size": 69372407, "size_in_bytes": 82394243}, "named_entity_recognition": {"description": "The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,\nsentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated \nwith syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also \nannotated with semantic role labels. The morphosyntactic tags and syntactic dependencies are included both in the \nJOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.\n", "citation": "@InProceedings{krek2020ssj500k,\ntitle = {The ssj500k Training Corpus for Slovene Language Processing},\nauthor={Krek, Simon and Erjavec, Toma\u017e and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and \u010cibej, Jaka and Brank, Janez},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},\nyear={2020},\npages={24-33}\n}\n", "homepage": "http://hdl.handle.net/11356/1434", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "idx_par": {"dtype": "int32", "id": null, "_type": "Value"}, "idx_sent": {"dtype": "int32", "id": null, "_type": "Value"}, "id_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msds": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ssj500k", "config_name": "named_entity_recognition", "version": {"version_str": "2.3.0", "description": null, "major": 2, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 17651305, "num_examples": 9489, "dataset_name": "ssj500k"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip": {"num_bytes": 13021836, "checksum": "08ac4d6cf74a45bc81f6e9ca53e7406c96c906c218cbb8ff2f7365e96655c460"}}, "download_size": 13021836, "post_processing_size": null, "dataset_size": 17651305, "size_in_bytes": 30673141}, "dependency_parsing_ud": {"description": "The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,\nsentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated \nwith syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also \nannotated with semantic role labels. 
The morphosyntactic tags and syntactic dependencies are included both in the \nJOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.\n", "citation": "@InProceedings{krek2020ssj500k,\ntitle = {The ssj500k Training Corpus for Slovene Language Processing},\nauthor={Krek, Simon and Erjavec, Toma\u017e and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and \u010cibej, Jaka and Brank, Janez},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},\nyear={2020},\npages={24-33}\n}\n", "homepage": "http://hdl.handle.net/11356/1434", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "idx_par": {"dtype": "int32", "id": null, "_type": "Value"}, "idx_sent": {"dtype": "int32", "id": null, "_type": "Value"}, "id_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msds": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ud_dep_head": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ud_dep_rel": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ssj500k", "config_name": "dependency_parsing_ud", "version": {"version_str": "2.3.0", "description": null, "major": 2, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 14048597, "num_examples": 8000, "dataset_name": "ssj500k"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip": {"num_bytes": 13021836, "checksum": "08ac4d6cf74a45bc81f6e9ca53e7406c96c906c218cbb8ff2f7365e96655c460"}}, "download_size": 13021836, "post_processing_size": null, "dataset_size": 14048597, "size_in_bytes": 27070433}, "dependency_parsing_jos": {"description": "The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,\nsentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated \nwith syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also \nannotated with semantic role labels. 
The morphosyntactic tags and syntactic dependencies are included both in the \nJOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.\n", "citation": "@InProceedings{krek2020ssj500k,\ntitle = {The ssj500k Training Corpus for Slovene Language Processing},\nauthor={Krek, Simon and Erjavec, Toma\u017e and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and \u010cibej, Jaka and Brank, Janez},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},\nyear={2020},\npages={24-33}\n}\n", "homepage": "http://hdl.handle.net/11356/1434", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "idx_par": {"dtype": "int32", "id": null, "_type": "Value"}, "idx_sent": {"dtype": "int32", "id": null, "_type": "Value"}, "id_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msds": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "jos_dep_head": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "jos_dep_rel": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ssj500k", "config_name": "dependency_parsing_jos", "version": {"version_str": "2.3.0", "description": null, "major": 2, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 23027788, "num_examples": 11411, "dataset_name": "ssj500k"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip": {"num_bytes": 13021836, "checksum": "08ac4d6cf74a45bc81f6e9ca53e7406c96c906c218cbb8ff2f7365e96655c460"}}, "download_size": 13021836, "post_processing_size": null, "dataset_size": 23027788, "size_in_bytes": 36049624}, "semantic_role_labeling": {"description": "The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,\nsentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated \nwith syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also \nannotated with semantic role labels. 
The morphosyntactic tags and syntactic dependencies are included both in the \nJOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.\n", "citation": "@InProceedings{krek2020ssj500k,\ntitle = {The ssj500k Training Corpus for Slovene Language Processing},\nauthor={Krek, Simon and Erjavec, Toma\u017e and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and \u010cibej, Jaka and Brank, Janez},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},\nyear={2020},\npages={24-33}\n}\n", "homepage": "http://hdl.handle.net/11356/1434", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "idx_par": {"dtype": "int32", "id": null, "_type": "Value"}, "idx_sent": {"dtype": "int32", "id": null, "_type": "Value"}, "id_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msds": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "srl_info": [{"idx_arg": {"dtype": "uint32", "id": null, "_type": "Value"}, "idx_head": {"dtype": "uint32", "id": null, "_type": "Value"}, "role": {"dtype": "string", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ssj500k", "config_name": "semantic_role_labeling", "version": {"version_str": "2.3.0", "description": null, "major": 2, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9901320, "num_examples": 5523, "dataset_name": "ssj500k"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip": {"num_bytes": 13021836, "checksum": "08ac4d6cf74a45bc81f6e9ca53e7406c96c906c218cbb8ff2f7365e96655c460"}}, "download_size": 13021836, "post_processing_size": null, "dataset_size": 9901320, "size_in_bytes": 22923156}, "multiword_expressions": {"description": "The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,\nsentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated \nwith syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also \nannotated with semantic role labels. 
The morphosyntactic tags and syntactic dependencies are included both in the \nJOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.\n", "citation": "@InProceedings{krek2020ssj500k,\ntitle = {The ssj500k Training Corpus for Slovene Language Processing},\nauthor={Krek, Simon and Erjavec, Toma\u017e and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and \u010cibej, Jaka and Brank, Janez},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},\nyear={2020},\npages={24-33}\n}\n", "homepage": "http://hdl.handle.net/11356/1434", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "idx_par": {"dtype": "int32", "id": null, "_type": "Value"}, "idx_sent": {"dtype": "int32", "id": null, "_type": "Value"}, "id_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msds": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "mwe_info": [{"type": {"dtype": "string", "id": null, "_type": "Value"}, "word_indices": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ssj500k", "config_name": "multiword_expressions", "version": {"version_str": "2.3.0", "description": null, "major": 2, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 24215008, "num_examples": 13516, "dataset_name": "ssj500k"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip": {"num_bytes": 13021836, "checksum": "08ac4d6cf74a45bc81f6e9ca53e7406c96c906c218cbb8ff2f7365e96655c460"}}, "download_size": 13021836, "post_processing_size": null, "dataset_size": 24215008, "size_in_bytes": 37236844}}
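
The removed dataset_infos.json carries per-config metadata, including the train-split example counts (27829 for `all_data`, 9489 for `named_entity_recognition`, and so on). A sketch of reading those counts from a local copy of the file:

```
import json

# Assumes a local copy of the removed dataset_infos.json.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for config_name, info in infos.items():
    print(config_name, info["splits"]["train"]["num_examples"])
```
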
dependency_parsing_jos/ssj500k-train.parquet
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a7f5d91a5548fddb0c6f7f28d3577383af1af421679fe98cb4b8e2c9a32b803
+ size 4398254
dependency_parsing_ud/ssj500k-train.parquet
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f671d8463135ffc9f99dd2fca3fd7ed98fade1a97831b33c8f47fb172445700d
+ size 2703455
multiword_expressions/ssj500k-train.parquet
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34a20fd196b50025543b07a409dc5b2291d8863c4725acaa393bc70e2658afe3
+ size 4824336
named_entity_recognition/ssj500k-train.parquet
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55db8174995d120f746a7ceed735b9b9f1a0423b714b132277342b3e4fc6196e
+ size 3363193
semantic_role_labeling/ssj500k-train.parquet
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01d3a963a82213309b5243302ece341172349fddbf4a250257db2f0d2d280dae
+ size 2000403
ssj500k.py
DELETED
@@ -1,365 +0,0 @@
- """ssj500k is a partially annotated training corpus for multiple syntactic and semantic tasks."""
- import re
- import xml.etree.ElementTree as ET
- import os
-
- import datasets
-
-
- _CITATION = """\
- @InProceedings{krek2020ssj500k,
- title = {The ssj500k Training Corpus for Slovene Language Processing},
- author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
- booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
- year={2020},
- pages={24-33}
- }
- """
-
- _DESCRIPTION = """\
- The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,
- sentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated
- with syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also
- annotated with semantic role labels. The morphosyntactic tags and syntactic dependencies are included both in the
- JOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies.
- """
-
- _HOMEPAGE = "http://hdl.handle.net/11356/1434"
-
- _LICENSE = "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)"
-
- _URLS = {
-     "ssj500k-en.tei": "https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1434/ssj500k-en.TEI.zip"
- }
-
-
- XML_NAMESPACE = "{http://www.w3.org/XML/1998/namespace}"
- IDX_ROOT_WORD = -1
- IDX_NA_HEAD = -2
- NA_TAG = "N/A"
-
-
- def namespace(element):
-     # https://stackoverflow.com/a/12946675
-     m = re.match(r'\{.*\}', element.tag)
-     return m.group(0) if m else ''
-
-
- def word_information(w_or_pc_el):
-     if w_or_pc_el.tag.endswith("pc"):
-         id_word = w_or_pc_el.attrib[f"{XML_NAMESPACE}id"]
-         form = w_or_pc_el.text.strip()
-         lemma = w_or_pc_el.text.strip()
-         msd = w_or_pc_el.attrib[f"msd"]
-     else:  # word - w
-         id_word = w_or_pc_el.attrib[f"{XML_NAMESPACE}id"]
-         form = w_or_pc_el.text.strip()
-         lemma = w_or_pc_el.attrib["lemma"]
-         msd = w_or_pc_el.attrib[f"msd"]
-
-     return id_word, form, lemma, msd
-
-
- class Ssj500k(datasets.GeneratorBasedBuilder):
-     """ssj500k is a partially annotated training corpus for multiple syntactic and semantic tasks."""
-
-     VERSION = datasets.Version("2.3.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="all_data", version=VERSION,
-                                description="The entire dataset with all annotations, in some cases partially missing."),
-         datasets.BuilderConfig(name="named_entity_recognition", version=VERSION,
-                                description="The data subset with annotated named entities."),
-         datasets.BuilderConfig(name="dependency_parsing_ud", version=VERSION,
-                                description="The data subset with annotated dependencies (UD schema)."),
-         datasets.BuilderConfig(name="dependency_parsing_jos", version=VERSION,
-                                description="The data subset with annotated dependencies (JOS schema)."),
-         datasets.BuilderConfig(name="semantic_role_labeling", version=VERSION,
-                                description="The data subset with annotated semantic roles."),
-         datasets.BuilderConfig(name="multiword_expressions", version=VERSION,
-                                description="The data subset with annotated named entities.")
-     ]
-
-     DEFAULT_CONFIG_NAME = "all_data"
-
-     def _info(self):
-         features_dict = {
-             "id_doc": datasets.Value("string"),
-             "idx_par": datasets.Value("int32"),
-             "idx_sent": datasets.Value("int32"),
-             "id_words": datasets.Sequence(datasets.Value("string")),
-             "words": datasets.Sequence(datasets.Value("string")),
-             "lemmas": datasets.Sequence(datasets.Value("string")),
-             "msds": datasets.Sequence(datasets.Value("string"))
-         }
-
-         ret_all_data = self.config.name == "all_data"
-         if ret_all_data:
-             features_dict.update({
-                 "has_ne_ann": datasets.Value("bool"), "has_ud_dep_ann": datasets.Value("bool"),
-                 "has_jos_dep_ann": datasets.Value("bool"), "has_srl_ann": datasets.Value("bool"),
-                 "has_mwe_ann": datasets.Value("bool")
-             })
-
-         if ret_all_data or self.config.name == "named_entity_recognition":
-             features_dict["ne_tags"] = datasets.Sequence(datasets.Value("string"))
-
-         if ret_all_data or self.config.name == "dependency_parsing_ud":
-             features_dict.update({
-                 "ud_dep_head": datasets.Sequence(datasets.Value("int32")),
-                 "ud_dep_rel": datasets.Sequence(datasets.Value("string"))
-             })
-
-         if ret_all_data or self.config.name == "dependency_parsing_jos":
-             features_dict.update({
-                 "jos_dep_head": datasets.Sequence(datasets.Value("int32")),
-                 "jos_dep_rel": datasets.Sequence(datasets.Value("string"))
-             })
-
-         if ret_all_data or self.config.name == "semantic_role_labeling":
-             features_dict.update({
-                 "srl_info": [{
-                     "idx_arg": datasets.Value("uint32"),
-                     "idx_head": datasets.Value("uint32"),
-                     "role": datasets.Value("string")
-                 }]
-             })
-
-         if ret_all_data or self.config.name == "multiword_expressions":
-             features_dict["mwe_info"] = [{
-                 "type": datasets.Value("string"),
-                 "word_indices": datasets.Sequence(datasets.Value("uint32"))
-             }]
-
-         features = datasets.Features(features_dict)
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         urls = _URLS["ssj500k-en.tei"]
-         data_dir = dl_manager.download_and_extract(urls)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"file_path": os.path.join(data_dir, "ssj500k-en.TEI", "ssj500k-en.body.xml")}
-             )
-         ]
-
-     def _generate_examples(self, file_path):
-         ret_all_data = self.config.name == "all_data"
-         ret_ne_only = self.config.name == "named_entity_recognition"
-         ret_ud_dep_only = self.config.name == "dependency_parsing_ud"
-         ret_jos_dep_only = self.config.name == "dependency_parsing_jos"
-         ret_srl_only = self.config.name == "semantic_role_labeling"
-         ret_mwe_only = self.config.name == "multiword_expressions"
-
-         curr_doc = ET.parse(file_path)
-         root = curr_doc.getroot()
-         NAMESPACE = namespace(root)
-
-         idx_example = 0
-         for idx_doc, curr_doc in enumerate(root.iterfind(f"{NAMESPACE}div")):
-             id_doc = curr_doc.attrib[f"{XML_NAMESPACE}id"]
-             doc_metadata = {}
-             metadata_el = curr_doc.find(f"{NAMESPACE}bibl")
-             if metadata_el is not None:
-                 for child in metadata_el:
-                     if child.tag.endswith("term"):
-                         if child.attrib[f"{XML_NAMESPACE}lang"] != "en":
-                             continue
-
-                         parts = child.text.strip().split(" / ")
-                         attr_name = parts[0]
-                         attr_value = " / ".join(parts[1:])
-
-                     elif child.tag.endswith("note"):
-                         attr_name = child.attrib["type"]
-                         attr_value = child.text.strip()
-                     else:
-                         attr_name = child.tag[len(NAMESPACE):]
-                         attr_value = child.text.strip()
-
-                     doc_metadata[attr_name] = attr_value
-
-             # IMPORTANT: This is a hack, because it is not clear which documents are annotated with NEs
-             # The numbers of annotated docs are obtained from the paper provided in `_CITATION` (Table 1)
-             has_ne = idx_doc < 498
-             has_mwe = idx_doc < 754
-             has_srl = idx_doc < 228
-
-             for idx_par, curr_par in enumerate(curr_doc.iterfind(f"{NAMESPACE}p")):
-                 for idx_sent, curr_sent in enumerate(curr_par.iterfind(f"{NAMESPACE}s")):
-                     id2position = {}
-                     id_words, words, lemmas, msds = [], [], [], []
-
-                     # Optional (partial) annotations
-                     named_ents = []
-                     has_ud_dep, ud_dep_heads, ud_dep_rels = False, [], []
-                     has_jos_dep, jos_dep_heads, jos_dep_rels = False, [], []
-                     srl_info = []
-                     mwe_info = []
-
-                     # Note: assuming that all words of a sentence are observed before processing the optional annotations
-                     # i.e., that <w> and <pc> elements come first, then the optional <linkGroup> annotations
-                     for curr_el in curr_sent:
-                         # Words
-                         if curr_el.tag.endswith(("w", "pc")):
-                             id_word, word, lemma, msd = word_information(curr_el)
-
-                             id2position[id_word] = len(id2position)
-                             id_words.append(id_word)
-                             words.append(word)
-                             lemmas.append(lemma)
-                             msds.append(msd)
-                             named_ents.append("O")
-
-                         # Named entities
-                         elif curr_el.tag.endswith("seg"):
-                             has_ne = True
-                             ne_type = curr_el.attrib["subtype"]  # {"per", "loc", "org", "misc", "deriv-per"}
-                             if ne_type.startswith("deriv-"):
-                                 ne_type = ne_type[len("deriv-"):]
-                             ne_type = ne_type.upper()
-
-                             num_ne_tokens = 0
-                             for curr_child in curr_el:
-                                 num_ne_tokens += 1
-                                 id_word, word, lemma, msd = word_information(curr_child)
-
-                                 id2position[id_word] = len(id2position)
-                                 id_words.append(id_word)
-                                 words.append(word)
-                                 lemmas.append(lemma)
-                                 msds.append(msd)
-
-                             assert num_ne_tokens > 0
-                             nes = [f"B-{ne_type.upper()}"] + [f"I-{ne_type.upper()}" for _ in range(num_ne_tokens - 1)]
-                             named_ents.extend(nes)
-
-                         elif curr_el.tag.endswith("linkGrp"):
-                             # UD dependencies
-                             if curr_el.attrib["type"] == "UD-SYN":
-                                 has_ud_dep = True
-                                 ud_dep_heads = [None for _ in range(len(words))]
-                                 ud_dep_rels = [None for _ in range(len(words))]
-
-                                 for link in curr_el:
-                                     dep_rel = link.attrib["ana"].split(":")[-1]
-                                     id_head_word, id_dependant = tuple(map(
-                                         lambda _t_id: _t_id[1:] if _t_id.startswith("#") else _t_id,
-                                         link.attrib["target"].split(" ")
-                                     ))
-
-                                     idx_head_word = id2position[id_head_word] if dep_rel != "root" else IDX_ROOT_WORD
-                                     idx_dep_word = id2position[id_dependant]
-
-                                     ud_dep_heads[idx_dep_word] = idx_head_word
-                                     ud_dep_rels[idx_dep_word] = dep_rel
-
-                             # JOS dependencies
-                             elif curr_el.attrib["type"] == "JOS-SYN":
-                                 has_jos_dep = True
-                                 jos_dep_heads = [None for _ in range(len(words))]
-                                 jos_dep_rels = [None for _ in range(len(words))]
-
-                                 for link in curr_el:
-                                     dep_rel = link.attrib["ana"].split(":")[-1]
-                                     id_head_word, id_dependant = tuple(map(
-                                         lambda _t_id: _t_id[1:] if _t_id.startswith("#") else _t_id,
-                                         link.attrib["target"].split(" ")
-                                     ))
-
-                                     idx_head_word = id2position[id_head_word] if dep_rel != "Root" else IDX_ROOT_WORD
-                                     idx_dep_word = id2position[id_dependant]
-
-                                     jos_dep_heads[idx_dep_word] = idx_head_word
-                                     jos_dep_rels[idx_dep_word] = dep_rel
-
-                             # Semantic role labels
-                             elif curr_el.attrib["type"] == "SRL":
-                                 for link in curr_el:
-                                     sem_role = link.attrib["ana"].split(":")[-1]
-                                     id_head_word, id_arg_word = tuple(map(
-                                         lambda _t_id: _t_id[1:] if _t_id.startswith("#") else _t_id,
-                                         link.attrib["target"].split(" ")
-                                     ))
-                                     idx_head_word = id2position[id_head_word]
-                                     idx_arg_word = id2position[id_arg_word]
-
-                                     srl_info.append({
-                                         "idx_arg": idx_arg_word,
-                                         "idx_head": idx_head_word,
-                                         "role": sem_role
-                                     })
-
-                             # Multi-word expressions
-                             elif curr_el.attrib["type"] == "MWE":
-                                 has_mwe = True
-                                 # Follow the KOMET/G-KOMET format, i.e. list of {"type": ..., "word_indices": ...}
-                                 for link in curr_el:
-                                     mwe_type = link.attrib["ana"].split(":")[-1]
-                                     involved_words = list(map(
-                                         lambda _t_id: _t_id[1:] if _t_id.startswith("#") else _t_id,
-                                         link.attrib["target"].split(" "))
-                                     )
-                                     word_indices = [id2position[_curr_tok] for _curr_tok in involved_words]
-                                     mwe_info.append({"type": mwe_type, "word_indices": word_indices})
-
-                     # Specified config expects only annotated instances, but there are none for the current instance
-                     if (ret_ne_only and not has_ne) or (ret_ud_dep_only and not has_ud_dep) or \
-                             (ret_jos_dep_only and not has_jos_dep) or (ret_srl_only and not has_srl) or \
-                             (ret_mwe_only and not has_mwe):
-                         continue
-
-                     instance_dict = {
-                         "id_doc": id_doc,
-                         "idx_par": idx_par,
-                         "idx_sent": idx_sent,
-                         "id_words": id_words,
-                         "words": words,
-                         "lemmas": lemmas,
-                         "msds": msds
-                     }
-
-                     if ret_ne_only or ret_all_data:
-                         if not has_ne:
-                             named_ents = [NA_TAG for _ in range(len(words))]
-
-                         instance_dict["ne_tags"] = named_ents
-
-                     if ret_ud_dep_only or ret_all_data:
-                         if not has_ud_dep:
-                             ud_dep_heads = [IDX_NA_HEAD for _ in range(len(words))]
-                             ud_dep_rels = [NA_TAG for _ in range(len(words))]
-
-                         instance_dict["ud_dep_head"] = ud_dep_heads
-                         instance_dict["ud_dep_rel"] = ud_dep_rels
-
-                     if ret_jos_dep_only or ret_all_data:
-                         if not has_jos_dep:
-                             jos_dep_heads = [IDX_NA_HEAD for _ in range(len(words))]
-                             jos_dep_rels = [NA_TAG for _ in range(len(words))]
-
-                         instance_dict["jos_dep_head"] = jos_dep_heads
-                         instance_dict["jos_dep_rel"] = jos_dep_rels
-
-                     if ret_srl_only or ret_all_data:
-                         instance_dict["srl_info"] = srl_info
-
-                     if ret_mwe_only or ret_all_data:
-                         instance_dict["mwe_info"] = mwe_info
-
-                     # When all data is returned, some instances are unannotated or partially annotated, mark instances with flags
-                     if ret_all_data:
-                         instance_dict.update({
-                             "has_ne_ann": has_ne, "has_ud_dep_ann": has_ud_dep, "has_jos_dep_ann": has_jos_dep,
-                             "has_srl_ann": has_srl, "has_mwe_ann": has_mwe
-                         })
-
-                     yield idx_example, instance_dict
-                     idx_example += 1
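
After this commit the data is served directly as Parquet, but the removed builder script can still be used from a local checkout if the original TEI processing is needed (it downloads and parses ssj500k-en.TEI.zip itself). A sketch, assuming a local copy of the script and a `datasets` release that still supports dataset scripts:

```
from datasets import load_dataset

# "path/to/ssj500k.py" is a placeholder for a local copy of the removed script.
data = load_dataset("path/to/ssj500k.py", "dependency_parsing_ud", split="train")
print(data.features)
```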