phucdev committed · Commit 36655e7 · 1 Parent(s): 4eefd8c

Upload dataloading script and README.md

Files changed (2):
  1. README.md +273 -0
  2. fabner.py +213 -0
README.md ADDED
@@ -0,0 +1,273 @@
---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: FabNER is a manufacturing text dataset for Named Entity Recognition.
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- manufacturing
- 2000-2020
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-MATE
          '2': I-MATE
          '3': O-MATE
          '4': E-MATE
          '5': S-MATE
          '6': B-MANP
          '7': I-MANP
          '8': O-MANP
          '9': E-MANP
          '10': S-MANP
          '11': B-MACEQ
          '12': I-MACEQ
          '13': O-MACEQ
          '14': E-MACEQ
          '15': S-MACEQ
          '16': B-APPL
          '17': I-APPL
          '18': O-APPL
          '19': E-APPL
          '20': S-APPL
          '21': B-FEAT
          '22': I-FEAT
          '23': O-FEAT
          '24': E-FEAT
          '25': S-FEAT
          '26': B-PRO
          '27': I-PRO
          '28': O-PRO
          '29': E-PRO
          '30': S-PRO
          '31': B-CHAR
          '32': I-CHAR
          '33': O-CHAR
          '34': E-CHAR
          '35': S-CHAR
          '36': B-PARA
          '37': I-PARA
          '38': O-PARA
          '39': E-PARA
          '40': S-PARA
          '41': B-ENAT
          '42': I-ENAT
          '43': O-ENAT
          '44': E-ENAT
          '45': S-ENAT
          '46': B-CONPRI
          '47': I-CONPRI
          '48': O-CONPRI
          '49': E-CONPRI
          '50': S-CONPRI
          '51': B-MANS
          '52': I-MANS
          '53': O-MANS
          '54': E-MANS
          '55': S-MANS
          '56': B-BIOP
          '57': I-BIOP
          '58': O-BIOP
          '59': E-BIOP
          '60': S-BIOP
  config_name: fabner
  splits:
  - name: train
    num_bytes: 4394010
    num_examples: 9435
  - name: validation
    num_bytes: 934347
    num_examples: 2183
  - name: test
    num_bytes: 940136
    num_examples: 2064
  download_size: 3793613
  dataset_size: 6268493
---

# Dataset Card for FabNER

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407](https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407)
- **Paper:** ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB

### Dataset Summary

FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals in manufacturing process science research.
Each word is annotated with one of twelve entity categories: Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and
BioMedical (BIOP). The annotations use the BIOES tagging scheme:
B=Beginning, I=Intermediate, E=End, S=Single, O=Outside.

For details about the dataset, please refer to the paper: ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)

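The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming this loading script is hosted under the `phucdev/fabner` repository id (an assumption based on this repository):

```python
from datasets import load_dataset

# "phucdev/fabner" is a hypothetical repository id; point this at wherever
# the loading script is actually hosted.
dataset = load_dataset("phucdev/fabner")

print(dataset)              # DatasetDict with train/validation/test splits
print(dataset["train"][0])  # {"id": "0", "tokens": [...], "ner_tags": [...]}
```
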
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

The language in the dataset is English.

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB

An example from the 'train' split looks as follows:
```json
{
  "id": "0",
  "tokens": ["Revealed", "the", "location-specific", "flow", "patterns", "and", "quantified", "the", "speeds", "of", "various", "types", "of", "flow", "."],
  "ner_tags": [0, 0, 0, 46, 49, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```

### Data Fields

- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.

The mapping from tag names to integer ids is:
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "O-MATE": 3, "E-MATE": 4, "S-MATE": 5, "B-MANP": 6, "I-MANP": 7, "O-MANP": 8, "E-MANP": 9, "S-MANP": 10, "B-MACEQ": 11, "I-MACEQ": 12, "O-MACEQ": 13, "E-MACEQ": 14, "S-MACEQ": 15, "B-APPL": 16, "I-APPL": 17, "O-APPL": 18, "E-APPL": 19, "S-APPL": 20, "B-FEAT": 21, "I-FEAT": 22, "O-FEAT": 23, "E-FEAT": 24, "S-FEAT": 25, "B-PRO": 26, "I-PRO": 27, "O-PRO": 28, "E-PRO": 29, "S-PRO": 30, "B-CHAR": 31, "I-CHAR": 32, "O-CHAR": 33, "E-CHAR": 34, "S-CHAR": 35, "B-PARA": 36, "I-PARA": 37, "O-PARA": 38, "E-PARA": 39, "S-PARA": 40, "B-ENAT": 41, "I-ENAT": 42, "O-ENAT": 43, "E-ENAT": 44, "S-ENAT": 45, "B-CONPRI": 46, "I-CONPRI": 47, "O-CONPRI": 48, "E-CONPRI": 49, "S-CONPRI": 50, "B-MANS": 51, "I-MANS": 52, "O-MANS": 53, "E-MANS": 54, "S-MANS": 55, "B-BIOP": 56, "I-BIOP": 57, "O-BIOP": 58, "E-BIOP": 59, "S-BIOP": 60}
```

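The integer ids can be converted back to tag strings through the `ClassLabel` feature. A short sketch, again assuming the hypothetical `phucdev/fabner` repository id:

```python
from datasets import load_dataset

dataset = load_dataset("phucdev/fabner")  # hypothetical repository id

# `ner_tags` is a Sequence of ClassLabel; `.feature` exposes the ClassLabel itself.
labels = dataset["train"].features["ner_tags"].feature
example = dataset["train"][0]
print(list(zip(example["tokens"], labels.int2str(example["ner_tags"]))))
```
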
### Data Splits

|        | Train | Dev  | Test |
|--------|-------|------|------|
| fabner | 9435  | 2183 | 2064 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{DBLP:journals/jim/KumarS22,
  author    = {Aman Kumar and
               Binil Starly},
  title     = {"FabNER": information extraction from manufacturing process science
               domain literature using named entity recognition},
  journal   = {J. Intell. Manuf.},
  volume    = {33},
  number    = {8},
  pages     = {2393--2407},
  year      = {2022},
  url       = {https://doi.org/10.1007/s10845-021-01807-x},
  doi       = {10.1007/s10845-021-01807-x},
  timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
  biburl    = {https://dblp.org/rec/journals/jim/KumarS22.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
fabner.py ADDED
@@ -0,0 +1,213 @@
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition."""

import datasets


_CITATION = """\
@article{DBLP:journals/jim/KumarS22,
  author    = {Aman Kumar and
               Binil Starly},
  title     = {"FabNER": information extraction from manufacturing process science
               domain literature using named entity recognition},
  journal   = {J. Intell. Manuf.},
  volume    = {33},
  number    = {8},
  pages     = {2393--2407},
  year      = {2022},
  url       = {https://doi.org/10.1007/s10845-021-01807-x},
  doi       = {10.1007/s10845-021-01807-x},
  timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
  biburl    = {https://dblp.org/rec/journals/jim/KumarS22.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""

_DESCRIPTION = """\
FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals in manufacturing process
science research.
Each word is annotated with one of twelve entity categories: Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and
BioMedical (BIOP). The annotations use the BIOES tagging scheme:
B=Beginning, I=Intermediate, E=End, S=Single, O=Outside.
"""

_HOMEPAGE = "https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407"

# TODO: Add the license for the dataset here if you can find it
_LICENSE = ""

# The Hugging Face Datasets library does not host the dataset itself; these URLs
# point to the original split files on figshare.
_URLS = {
    "train": "https://figshare.com/ndownloader/files/28405854/S2-train.txt",
    "validation": "https://figshare.com/ndownloader/files/28405857/S3-val.txt",
    "test": "https://figshare.com/ndownloader/files/28405851/S1-test.txt",
}


class FabNER(datasets.GeneratorBasedBuilder):
    """FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition."""

    VERSION = datasets.Version("1.1.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="fabner", version=VERSION, description="The FabNER dataset"),
    ]

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "tokens": datasets.Sequence(datasets.Value("string")),
                "ner_tags": datasets.Sequence(
                    datasets.features.ClassLabel(
                        names=[
                            "O",
                            "B-MATE",  # Material
                            "I-MATE",
                            "O-MATE",
                            "E-MATE",
                            "S-MATE",
                            "B-MANP",  # Manufacturing Process
                            "I-MANP",
                            "O-MANP",
                            "E-MANP",
                            "S-MANP",
                            "B-MACEQ",  # Machine/Equipment
                            "I-MACEQ",
                            "O-MACEQ",
                            "E-MACEQ",
                            "S-MACEQ",
                            "B-APPL",  # Application
                            "I-APPL",
                            "O-APPL",
                            "E-APPL",
                            "S-APPL",
                            "B-FEAT",  # Engineering Features
                            "I-FEAT",
                            "O-FEAT",
                            "E-FEAT",
                            "S-FEAT",
                            "B-PRO",  # Mechanical Properties
                            "I-PRO",
                            "O-PRO",
                            "E-PRO",
                            "S-PRO",
                            "B-CHAR",  # Process Characterization
                            "I-CHAR",
                            "O-CHAR",
                            "E-CHAR",
                            "S-CHAR",
                            "B-PARA",  # Process Parameters
                            "I-PARA",
                            "O-PARA",
                            "E-PARA",
                            "S-PARA",
                            "B-ENAT",  # Enabling Technology
                            "I-ENAT",
                            "O-ENAT",
                            "E-ENAT",
                            "S-ENAT",
                            "B-CONPRI",  # Concept/Principles
                            "I-CONPRI",
                            "O-CONPRI",
                            "E-CONPRI",
                            "S-CONPRI",
                            "B-MANS",  # Manufacturing Standards
                            "I-MANS",
                            "O-MANS",
                            "E-MANS",
                            "S-MANS",
                            "B-BIOP",  # BioMedical
                            "I-BIOP",
                            "O-BIOP",
                            "E-BIOP",
                            "S-BIOP",
                        ]
                    )
                ),
            }
        )
159
+ # This is the description that will appear on the datasets page.
160
+ description=_DESCRIPTION,
161
+ # This defines the different columns of the dataset and their types
162
+ features=features, # Here we define them above because they are different between the two configurations
163
+ # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
164
+ # specify them. They'll be used if as_supervised=True in builder.as_dataset.
165
+ # supervised_keys=("sentence", "label"),
166
+ # Homepage of the dataset for documentation
167
+ homepage=_HOMEPAGE,
168
+ # License for the dataset if available
169
+ license=_LICENSE,
170
+ # Citation for the dataset
171
+ citation=_CITATION,
172
+ )
173
+
174
+ def _split_generators(self, dl_manager):
175
+ # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
176
+
177
+ # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
178
+ # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
179
+ # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
180
+ downloaded_files = dl_manager.download_and_extract(_URLS)
181
+
182
+ return [datasets.SplitGenerator(name=i, gen_kwargs={"filepath": downloaded_files[str(i)]})
183
+ for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]
184
+
185
+ # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
186
+ def _generate_examples(self, filepath):
187
+ # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
188
+ with open(filepath, encoding="utf-8") as f:
189
+ guid = 0
190
+ tokens = []
191
+ ner_tags = []
192
+ for line in f:
193
+ if line == "" or line == "\n":
194
+ if tokens:
195
+ yield guid, {
196
+ "id": str(guid),
197
+ "tokens": tokens,
198
+ "ner_tags": ner_tags,
199
+ }
200
+ guid += 1
201
+ tokens = []
202
+ ner_tags = []
203
+ else:
204
+ splits = line.split(" ")
205
+ tokens.append(splits[0])
206
+ ner_tags.append(splits[1].rstrip())
207
+ # last example
208
+ if tokens:
209
+ yield guid, {
210
+ "id": str(guid),
211
+ "tokens": tokens,
212
+ "ner_tags": ner_tags,
213
+ }
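
For reference, `_generate_examples` above expects each split file in a two-column CoNLL-style layout: one `token TAG` pair per line, separated by a single space, with blank lines between sentences. A self-contained sketch of the same parsing logic, run on a hypothetical in-memory snippet whose tags are taken from the example instance in the README:

```python
import io

# Hypothetical snippet in the format the loader parses; "flow patterns"
# is tagged B-CONPRI/E-CONPRI as in the README example instance.
raw = io.StringIO(
    "Revealed O\n"
    "the O\n"
    "flow B-CONPRI\n"
    "patterns E-CONPRI\n"
    "\n"
)

sentences = []
tokens, tags = [], []
for line in raw:
    if line == "" or line == "\n":
        if tokens:
            sentences.append((tokens, tags))
            tokens, tags = [], []
    else:
        token, tag = line.split(" ")
        tokens.append(token)
        tags.append(tag.rstrip())
if tokens:  # flush a final sentence that is not followed by a blank line
    sentences.append((tokens, tags))

print(sentences)
# [(['Revealed', 'the', 'flow', 'patterns'], ['O', 'O', 'B-CONPRI', 'E-CONPRI'])]
```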