parquet-converter committed on
Commit
b02bb24
1 Parent(s): 884b744

Update parquet files

Files changed (5)
  1. .gitattributes +0 -53
  2. README.md +0 -159
  3. dataset_infos.json +0 -1
  4. default/vuamc-train.parquet +3 -0
  5. vuamc.py +0 -358
.gitattributes DELETED
@@ -1,53 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,159 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - en
- language_creators:
- - found
- license:
- - other
- multilinguality:
- - monolingual
- pretty_name: VUA Metaphor Corpus
- size_categories:
- - 10K<n<100K
- - 100K<n<1M
- source_datasets: []
- tags:
- - metaphor-classification
- - multiword-expression-detection
- - vua20
- - vua18
- - mipvu
- task_categories:
- - text-classification
- - token-classification
- task_ids:
- - multi-class-classification
- ---
-
- # Dataset Card for VUA Metaphor Corpus
-
- **Important note #1**: This is a slightly simplified but mostly complete parse of the corpus. Missing are the lemmas and some metadata that were not considered important at the time the parser was written. See the section `Simplifications` for more information.
-
- **Important note #2**: The dataset contains metadata - to ignore it and correctly remap the annotations, see the section `Discarding metadata`.
-
- ### Dataset Summary
-
- The VUA Metaphor Corpus (VUAMC) contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor. There are four registers, each comprising about 50,000 words: academic texts, news texts, fiction, and conversations.
- Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, a subdivision is made between clear cases of metaphor and borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of metaphor-related words distinguishes between direct, indirect, and implicit metaphor.
-
- ### Supported Tasks and Leaderboards
-
- Metaphor detection, metaphor type classification.
-
- ### Languages
-
- English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A sample instance from the dataset:
- ```
- {
-     'document_name': 'kcv-fragment42',
-     'words': ['', 'I', 'think', 'we', 'should', 'have', 'different', 'holidays', '.'],
-     'pos_tags': ['N/A', 'PNP', 'VVB', 'PNP', 'VM0', 'VHI', 'AJ0', 'NN2', 'PUN'],
-     'met_type': [
-         {'type': 'mrw/met', 'word_indices': [5]}
-     ],
-     'meta': ['vocal/laugh', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A']
- }
- ```
-
- ### Data Fields
-
- The instances are ordered as they appear in the corpus.
-
- - `document_name`: a string containing the name of the document in which the sentence appears;
- - `words`: words in the sentence (`""` when the word represents metadata);
- - `pos_tags`: POS tags of the words, encoded using the BNC basic tagset (`"N/A"` when the word does not have an associated POS tag);
- - `met_type`: metaphors in the sentence, marked by their type and word indices, as shown in the sketch below;
- - `meta`: selected metadata tags providing additional context to the sentence. Metadata may not correspond to a specific word; in that case, the metadata is represented with an empty string (`""`) in `words` and an `"N/A"` tag in `pos_tags`.
-
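- A minimal sketch of reading the metaphor annotations of an instance back as words (assuming the `datasets` library is installed):
-
- ```python
- import datasets
-
- data = datasets.load_dataset("matejklemen/vuamc", split="train")
-
- ex = data[0]
- for met_info in ex["met_type"]:
-     # each entry in `word_indices` indexes into `words` of the same instance
-     print(met_info["type"], [ex["words"][i] for i in met_info["word_indices"]])
- ```
-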
- ## Dataset Creation
-
- For detailed information on the corpus, please check out the references in the `Citation Information` section or contact the dataset authors.
-
- ## Simplifications
- The raw corpus is equipped with rich metadata and encoded in the TEI XML format. The textual part is fully parsed except for the lemmas, i.e. all the sentences in the raw corpus are present in the dataset.
- However, fully parsing the metadata is unnecessarily tedious, so certain simplifications were made:
- - paragraph information is not preserved, as the dataset is parsed at the sentence level;
- - manual corrections (`<corr>`) of incorrectly written words are ignored, and the original, incorrect form of the words is used instead;
- - `<ptr>` and `<anchor>` tags are ignored, as I cannot figure out what they represent;
- - the attributes `rendition` (in `<hi>` tags) and `new` (in `<shift>` tags) are not exposed.
-
- ## Discarding metadata
-
- The dataset contains rich metadata, which is stored in the `meta` attribute. To keep the data aligned, empty words or `"N/A"`s are inserted into the other attributes. If you want to ignore the metadata and remap the metaphor type annotations accordingly, you can use code similar to the following snippet:
- ```python
- import datasets
-
- data = datasets.load_dataset("matejklemen/vuamc")["train"]
- data = data.to_pandas()
-
- for idx_ex in range(data.shape[0]):
-     curr_ex = data.iloc[idx_ex]
-
-     # Map the old (metadata-inclusive) word indices to the new (metadata-free) ones
-     idx_remap = {}
-     for idx_word, word in enumerate(curr_ex["words"]):
-         if len(word) != 0:
-             idx_remap[idx_word] = len(idx_remap)
-
-     # Note that lists are stored as np arrays by datasets, while we are storing new data in a list!
-     # (unhandled for simplicity)
-     words, pos_tags, met_type = curr_ex[["words", "pos_tags", "met_type"]].tolist()
-     if len(idx_remap) != len(curr_ex["words"]):
-         words = list(filter(lambda _word: len(_word) > 0, curr_ex["words"]))
-         pos_tags = list(filter(lambda _pos: _pos != "N/A", curr_ex["pos_tags"]))
-         met_type = []
-
-         for met_info in curr_ex["met_type"]:
-             met_type.append({
-                 "type": met_info["type"],
-                 "word_indices": list(map(lambda _i: idx_remap[_i], met_info["word_indices"]))
-             })
- ```
-
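- After the remapping, every index in `word_indices` points at an actual word in the filtered `words` list. A minimal sanity check, placed at the end of the loop body above:
- ```python
-     assert len(words) == len(pos_tags)
-     for met_info in met_type:
-         for i in met_info["word_indices"]:
-             assert 0 <= i < len(words) and len(words[i]) > 0
- ```
-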
- ## Additional Information
-
- ### Dataset Curators
-
- Gerard Steen et al. (please see http://hdl.handle.net/20.500.12024/2541 for the full list).
-
- ### Licensing Information
-
- Available for non-commercial use on condition that the terms of the [BNC Licence](http://www.natcorp.ox.ac.uk/docs/licence.html) are observed and that this header is included in its entirety with any copy distributed.
-
- ### Citation Information
-
- ```
- @book{steen2010method,
-     title={A method for linguistic metaphor identification: From MIP to MIPVU},
-     author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje},
-     volume={14},
-     year={2010},
-     publisher={John Benjamins Publishing}
- }
- ```
-
- ```
- @inproceedings{leong-etal-2020-report,
-     title = "A Report on the 2020 {VUA} and {TOEFL} Metaphor Detection Shared Task",
-     author = "Leong, Chee Wee (Ben) and
-       Beigman Klebanov, Beata and
-       Hamill, Chris and
-       Stemle, Egon and
-       Ubale, Rutuja and
-       Chen, Xianyang",
-     booktitle = "Proceedings of the Second Workshop on Figurative Language Processing",
-     year = "2020",
-     url = "https://aclanthology.org/2020.figlang-1.3",
-     doi = "10.18653/v1/2020.figlang-1.3",
-     pages = "18--29"
- }
- ```
-
- ### Contributions
-
- Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "The resource contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor. \nThere are four registers, each comprising about 50,000 words: academic texts, news texts, fiction, and conversations. \nWords have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for \nmetaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal \nmetaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made \nbetween clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of \nmetaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor.\n", "citation": "@book{steen2010method,\n title={A method for linguistic metaphor identification: From MIP to MIPVU},\n author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje},\n volume={14},\n year={2010},\n publisher={John Benjamins Publishing}\n}\n", "homepage": "https://hdl.handle.net/20.500.12024/2541", "license": "Available for non-commercial use on condition that the terms of the BNC Licence are observed and that this header is included in its entirety with any copy distributed.", "features": {"document_name": {"dtype": "string", "id": null, "_type": "Value"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "met_type": [{"type": {"dtype": "string", "id": null, "_type": "Value"}, "word_indices": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}], "meta": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "vuamc", "config_name": "default", "version": {"version_str": "1.0.1", "description": null, "major": 1, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 6487858, "num_examples": 16740, "dataset_name": "vuamc"}}, "download_checksums": {"https://ota.bodleian.ox.ac.uk/repository/xmlui/bitstream/handle/20.500.12024/2541/VUAMC.xml": {"num_bytes": 16820946, "checksum": "0ac1a77cc1879aa0c87e2879481d0e1e3f28e36b1701893c096a33ff11aa6e0d"}}, "download_size": 16820946, "post_processing_size": null, "dataset_size": 6487858, "size_in_bytes": 23308804}}
default/vuamc-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25e21a435278ed4c87ef660b886868be6dad2ad2678cfef1d8f6fc5f89bf0696
+ size 1121107
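
With this commit the corpus is served from the pre-converted parquet file instead of the loading script below. Loading should work as before; a minimal sketch (the expected row count of 16740 comes from the removed `dataset_infos.json`):

```python
import datasets

data = datasets.load_dataset("matejklemen/vuamc", split="train")
print(data.num_rows)  # expected: 16740

# Alternatively, the parquet file itself can be read directly:
# import pandas as pd
# df = pd.read_parquet("default/vuamc-train.parquet")
```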
vuamc.py DELETED
@@ -1,358 +0,0 @@
- """ English metaphor-annotated corpus. """
-
- import os
- from copy import deepcopy
-
- import datasets
- import logging
- import re
-
- import xml.etree.ElementTree as ET
- from typing import List, Tuple, Dict
-
-
- _CITATION = """\
- @book{steen2010method,
-     title={A method for linguistic metaphor identification: From MIP to MIPVU},
-     author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje},
-     volume={14},
-     year={2010},
-     publisher={John Benjamins Publishing}
- }
- """
-
- _DESCRIPTION = """\
- The resource contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor.
- There are four registers, each comprising about 50,000 words: academic texts, news texts, fiction, and conversations.
- Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for
- metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal
- metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made
- between clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of
- metaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor.
- """
-
- _HOMEPAGE = "https://hdl.handle.net/20.500.12024/2541"
-
- _LICENSE = "Available for non-commercial use on condition that the terms of the BNC Licence are observed and that " \
-            "this header is included in its entirety with any copy distributed."
-
- _URLS = {
-     "vuamc": "https://ota.bodleian.ox.ac.uk/repository/xmlui/bitstream/handle/20.500.12024/2541/VUAMC.xml"
- }
-
-
- XML_NAMESPACE = "{http://www.w3.org/XML/1998/namespace}"
- VICI_NAMESPACE = "{http://www.tei-c.org/ns/VICI}"
- NA_STR = "N/A"
-
-
- def namespace(element):
-     # https://stackoverflow.com/a/12946675
-     m = re.match(r'\{.*\}', element.tag)
-     return m.group(0) if m else ''
-
-
- def resolve_recursively(el, ns):
-     words, pos_tags, met_type, meta_tags = [], [], [], []
-
-     if el.tag.endswith("w"):
-         # A <w>ord may be
-         # (1) just text,
-         # (2) a metaphor (text fully enclosed in another seg)
-         # (3) a partial metaphor (optionally some text, followed by a seg, optionally followed by more text)
-         idx_word = 0
-         _w_text = el.text.strip() if el.text is not None else ""
-         if len(_w_text) > 0:
-             words.append(_w_text)
-             pos_tags.append(el.attrib["type"])
-             meta_tags.append(NA_STR)
-             idx_word += 1
-
-         met_els = el.findall(f"{ns}seg")
-         for met_el in met_els:
-             parse_tail = True
-             if met_el.text is None:
-                 # Handle encoding inconsistency where the metaphor is encoded without a closing tag (I hate this format)
-                 # <w lemma="to" type="PRP"><seg function="mrw" type="met" vici:morph="n"/>to </w>
-                 parse_tail = False
-                 _w_text = met_el.tail.strip()
-             else:
-                 _w_text = met_el.text.strip()
-
-             curr_met_type = met_el.attrib["function"]
-
-             # Let the user decide how they want to aggregate metaphors
-             if "type" in met_el.attrib:
-                 curr_met_type = f"{curr_met_type}/{met_el.attrib['type']}"
-
-             if "subtype" in met_el.attrib:
-                 curr_met_type = f"{curr_met_type}/{met_el.attrib['subtype']}"
-
-             words.append(_w_text)
-             pos_tags.append(el.attrib["type"])
-             meta_tags.append(NA_STR)
-
-             met_dict = {"type": curr_met_type, "word_indices": [idx_word]}
-             # Multi-word metaphors are annotated with xml:id="..." or corresp="..."
-             if f"{XML_NAMESPACE}id" in met_el.attrib:
-                 met_dict["id"] = met_el.attrib[f"{XML_NAMESPACE}id"]
-             elif "corresp" in met_el.attrib:
-                 met_dict["id"] = met_el.attrib["corresp"][1:]  # remove the "#" in front
-
-             met_type.append(met_dict)
-             idx_word += 1
-
-             if not parse_tail:
-                 continue
-
-             _w_text = met_el.tail.strip() if met_el.tail is not None else ""
-             if len(_w_text) > 0:
-                 words.append(_w_text)
-                 pos_tags.append(el.attrib["type"])
-                 meta_tags.append(NA_STR)
-                 idx_word += 1
-
-     elif el.tag.endswith("vocal"):
-         desc_el = el.find(f"{ns}desc")
-         description = desc_el.text.strip() if desc_el is not None else "unknown"
-
-         words.append("")
-         pos_tags.append(NA_STR)
-         meta_tags.append(f"vocal/{description}")  # vocal/<desc>
-
-     elif el.tag.endswith("gap"):
-         words.append("")
-         pos_tags.append(NA_STR)
-         meta_tags.append(f"gap/{el.attrib.get('reason', 'unclear')}")  # gap/<reason>
-
-     elif el.tag.endswith("incident"):
-         desc_el = el.find(f"{ns}desc")
-         description = desc_el.text.strip() if desc_el is not None else "unknown"
-
-         words.append("")
-         pos_tags.append(NA_STR)
-         meta_tags.append(f"incident/{description}")
-
-     elif el.tag.endswith("shift"):
-         # TODO: this is not exposed
-         new_state = el.attrib.get("new", "normal")
-         children = list(iter(el))
-         # NOTE: Intentionally skip shifts like this, without children:
-         # <u who="#PS05E"> <shift new="crying"/> </u>
-         if len(children) > 0:
-             for w_el in el:
-                 _words, _pos, _mets, _metas = resolve_recursively(w_el, ns=ns)
-                 words.extend(_words)
-                 pos_tags.extend(_pos)
-                 meta_tags.extend(_metas)
-
-     elif el.tag.endswith("seg"):
-         # Direct <seg> descendant of a sentence indicates truncated text
-         word_el = el.find(f"{ns}w")
-
-         words.append(word_el.text.strip())
-         pos_tags.append(word_el.attrib["type"])
-         meta_tags.append(NA_STR)
-
-     elif el.tag.endswith("pause"):
-         words.append("")
-         pos_tags.append(NA_STR)
-         meta_tags.append("pause")
-
-     elif el.tag.endswith("sic"):
-         for w_el in el:
-             _words, _pos, _mets, _metas = resolve_recursively(w_el, ns=ns)
-             words.extend(_words)
-             pos_tags.extend(_pos)
-             meta_tags.extend(_metas)
-
-     elif el.tag.endswith("c"):
-         words.append(el.text.strip())
-         pos_tags.append(el.attrib["type"])
-         meta_tags.append(NA_STR)
-
-     elif el.tag.endswith("pb"):
-         words.append("")
-         pos_tags.append(NA_STR)
-         meta_tags.append(NA_STR)
-
-     elif el.tag.endswith("hi"):
-         # TODO: this is not exposed
-         rendition = el.attrib.get("rend", "normal")
-
-         for child_el in el:
-             _words, _pos, _mets, _metas = resolve_recursively(child_el, ns=ns)
-             words.extend(_words)
-             pos_tags.extend(_pos)
-             meta_tags.extend(_metas)
-
-     elif el.tag.endswith("choice"):
-         sic_el = el.find(f"{ns}sic")
-         _words, _pos, _mets, _metas = resolve_recursively(sic_el, ns=ns)
-         words.extend(_words)
-         pos_tags.extend(_pos)
-         met_type.extend(_mets)
-         meta_tags.extend(_metas)
-
-     elif el.tag.endswith(("ptr", "corr")):
-         # Intentionally skipping these:
-         # - no idea what <ptr> is
-         # - <sic> is being parsed instead of <corr>
-         pass
-
-     else:
-         logging.warning(f"Unrecognized child element: {el.tag}.\n"
-                         f"If you are seeing this message, please open an issue on HF datasets.")
-
-     return words, pos_tags, met_type, meta_tags
-
-
- def parse_sent(sent_el, ns) -> Tuple[List[str], List[str], List[Dict], List[str]]:
-     all_words, all_pos_tags, all_met_types, all_metas = [], [], [], []
-     for child_el in sent_el:
-         word, pos, mtype, meta = resolve_recursively(child_el, ns=ns)
-         # Need to remap local (index inside the word group) `word_indices` to global (index inside the sentence)
-         if len(mtype) > 0:
-             base = len(all_words)
-             for idx_met, met_info in enumerate(mtype):
-                 mtype[idx_met]["word_indices"] = list(map(lambda _i: base + _i, met_info["word_indices"]))
-
-         all_words.extend(word)
-         all_pos_tags.extend(pos)
-         all_met_types.extend(mtype)
-         all_metas.extend(meta)
-
-     # Check if any of the independent metaphor annotations belong to the same word group (e.g., "taking" and "over")
-     if len(all_met_types) > 0:
-         grouped_met_type = {}
-         for met_info in all_met_types:
-             curr_id = met_info.get("id", f"met{len(grouped_met_type)}")
-
-             if curr_id in grouped_met_type:
-                 existing_data = grouped_met_type[curr_id]
-                 existing_data["word_indices"].extend(met_info["word_indices"])
-             else:
-                 existing_data = deepcopy(met_info)
-
-             grouped_met_type[curr_id] = existing_data
-
-         new_met_types = []
-         for _, met_info in grouped_met_type.items():
-             if "id" in met_info:
-                 del met_info["id"]
-             new_met_types.append(met_info)
-
-         all_met_types = new_met_types
-
-     return all_words, all_pos_tags, all_met_types, all_metas
-
-
- def parse_text_body(body_el, ns):
-     all_words: List[List] = []
-     all_pos: List[List] = []
-     all_met_type: List[List] = []
-     all_meta: List[List] = []
-
-     # Edge case #1: a <s>entence
-     if body_el.tag.endswith("s"):
-         words, pos_tags, met_types, meta_tags = parse_sent(body_el, ns=ns)
-         all_words.append(words)
-         all_pos.append(pos_tags)
-         all_met_type.append(met_types)
-         all_meta.append(meta_tags)
-
-     # Edge case #2: a <u>tterance is either itself a sentence (containing words or metadata directly) or contains multiple <s>entences as children
-     elif body_el.tag.endswith("u"):
-         children = list(filter(lambda _child: not _child.tag.endswith("ptr"), list(iter(body_el))))
-         is_utterance_sent = all(map(lambda _child: not _child.tag.endswith("s"), children))
-         if is_utterance_sent:
-             # <u> contains elements as children that are not a <s>entence, so it is itself considered a sentence
-             words, pos_tags, met_types, meta_tags = parse_sent(body_el, ns=ns)
-             all_words.append(words)
-             all_pos.append(pos_tags)
-             all_met_type.append(met_types)
-             all_meta.append(meta_tags)
-         else:
-             # <u> contains one or more <s>entence children
-             for _child in children:
-                 words, pos_tags, met_types, meta_tags = parse_sent(_child, ns=ns)
-                 all_words.append(words)
-                 all_pos.append(pos_tags)
-                 all_met_type.append(met_types)
-                 all_meta.append(meta_tags)
-
-     # Recursively go deeper through all the <p>aragraphs, <div>s, etc. until we reach the sentences
-     else:
-         for _child in body_el:
-             _c_word, _c_pos, _c_met, _c_meta = parse_text_body(_child, ns=ns)
-
-             all_words.extend(_c_word)
-             all_pos.extend(_c_pos)
-             all_met_type.extend(_c_met)
-             all_meta.extend(_c_meta)
-
-     return all_words, all_pos, all_met_type, all_meta
-
-
- class VUAMC(datasets.GeneratorBasedBuilder):
-     """English metaphor-annotated corpus. """
-
-     VERSION = datasets.Version("1.0.1")
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "document_name": datasets.Value("string"),
-                 "words": datasets.Sequence(datasets.Value("string")),
-                 "pos_tags": datasets.Sequence(datasets.Value("string")),
-                 "met_type": [{
-                     "type": datasets.Value("string"),
-                     "word_indices": datasets.Sequence(datasets.Value("uint32"))
-                 }],
-                 "meta": datasets.Sequence(datasets.Value("string"))
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION
-         )
-
-     def _split_generators(self, dl_manager):
-         urls = _URLS["vuamc"]
-         data_path = dl_manager.download_and_extract(urls)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"file_path": os.path.join(data_path)}
-             )
-         ]
-
-     def _generate_examples(self, file_path):
-         curr_doc = ET.parse(file_path)
-         root = curr_doc.getroot()
-         NAMESPACE = namespace(root)
-         root = root.find(f"{NAMESPACE}text")
-
-         idx_instance = 0
-         for idx_doc, doc in enumerate(root.iterfind(f".//{NAMESPACE}text")):
-             document_name = doc.attrib[f"{XML_NAMESPACE}id"]
-             body = doc.find(f"{NAMESPACE}body")
-             body_data = parse_text_body(body, ns=NAMESPACE)
-
-             for sent_words, sent_pos, sent_met_type, sent_meta in zip(*body_data):
-                 # TODO: Due to some simplifications (not parsing certain metadata), some sentences may be empty
-                 if len(sent_words) == 0:
-                     continue
-
-                 yield idx_instance, {
-                     "document_name": document_name,
-                     "words": sent_words,
-                     "pos_tags": sent_pos,
-                     "met_type": sent_met_type,
-                     "meta": sent_meta
-                 }
-                 idx_instance += 1