jensjorisdecorte committed
Commit 9cbaea1 · verified · 1 Parent(s): 171620d

Add new SentenceTransformer model.

1_SmartTokenPooling/config.json ADDED
{"word_embedding_dimension": 768, "window_size": -1}
1_SmartTokenPooling/model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9bbcbf73561f6bc5d0a17ea6a2081feed2d1304e87602d8c502d9a5c4bd85576
size 16
README.md ADDED
---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:152151
- loss:HardMultipleNegativesRankingLoss
- loss:CachedMultipleNegativesSymmetricRankingLoss
widget:
- source_sentence: Use arc welding techniques to make welds in conditions of very
    high pressure, usually in an underwater dry chamber such as a diving bell. Compensate
    for the negative consequences of high pressure on a weld, such as the shorter
    and less steady welding arc.
  sentences:
  - skill_skill
  - weld in hyperbaric conditions
  - human-robot collaboration
- source_sentence: Carry out mineral processing operations, which aim to separate
    valuable minerals from waste rock or grout. Oversee and implement processes such
    as samping, analysis and most importantly the electrostatic separation process,
    which separates valuable materials from mineral ore.
  sentences:
  - internet governance
  - implement mineral processes
  - skill_skill
- source_sentence: looking for a pest control technician with strong knowledge in
    preventative measures to minimize pest populations A successful candidate will
    have experience in cryopreservation techniques as well as laboratory protocols
  sentences:
  - cryopreservation
  - food preservation
  - skill_sentence
- source_sentence: Candidates with experience using popular balance sheet software
    are encouraged to apply for our accounting position. We are looking for a cargo
    handling expert who can maximize efficiency on our shipping vessels.
  sentences:
  - skill_sentence
  - perform balance sheet operations
  - promote inclusion
- source_sentence: Must have the ability to read and interpret schematics and effectively
    install and calibrate lift governors to ensure compliance with safety standards.
    The ideal candidate must have an ear for identifying music with commercial potential
    and understand the current market trends.
  sentences:
  - prepare credit reports
  - install lift governor
  - skill_sentence
---

# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the skill_sentence and skill_skill datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
- **Maximum Sequence Length:** 96 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
  - skill_sentence
  - skill_skill
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 96, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): SmartTokenPooling({'word_embedding_dimension': 768, 'window_size': -1})
)
```
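
The `SmartTokenPooling` module is a custom pooling layer shipped with this repository: `modules.json` loads it by its dotted import path `sbert_patch.SmartTokenPooling.SmartTokenPooling`, so that package must be importable when the model is loaded. Its internals are not documented in this card. Purely as an illustrative sketch, the class below shows the interface a custom Sentence Transformers pooling module of this shape has to satisfy, with plain masked mean pooling standing in for the actual (unknown) pooling logic:

```python
# Hypothetical sketch: the real SmartTokenPooling implementation is not public.
# This only illustrates the custom-module contract that sentence-transformers
# expects (a forward pass over a features dict, plus save/load via config.json).
import json
import os

from torch import nn


class SmartTokenPoolingSketch(nn.Module):
    def __init__(self, word_embedding_dimension: int = 768, window_size: int = -1):
        super().__init__()
        self.word_embedding_dimension = word_embedding_dimension
        # window_size=-1 plausibly means "no window limit" -- an assumption.
        self.window_size = window_size

    def forward(self, features: dict) -> dict:
        # Stand-in pooling: masked mean over the token embeddings.
        token_embeddings = features["token_embeddings"]
        mask = features["attention_mask"].unsqueeze(-1).to(token_embeddings.dtype)
        summed = (token_embeddings * mask).sum(dim=1)
        counts = mask.sum(dim=1).clamp(min=1e-9)
        features["sentence_embedding"] = summed / counts
        return features

    def get_sentence_embedding_dimension(self) -> int:
        return self.word_embedding_dimension

    def save(self, output_path: str) -> None:
        config = {
            "word_embedding_dimension": self.word_embedding_dimension,
            "window_size": self.window_size,
        }
        with open(os.path.join(output_path, "config.json"), "w") as f:
            json.dump(config, f)

    @staticmethod
    def load(input_path: str) -> "SmartTokenPoolingSketch":
        with open(os.path.join(input_path, "config.json")) as f:
            return SmartTokenPoolingSketch(**json.load(f))
```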

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jensjorisdecorte/ConTeXT-Skill-Extraction-base")
# Run inference
sentences = [
    'Must have the ability to read and interpret schematics and effectively install and calibrate lift governors to ensure compliance with safety standards. The ideal candidate must have an ear for identifying music with commercial potential and understand the current market trends.',
    'install lift governor',
    'skill_sentence',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
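
Because the model is trained to align vacancy sentences with skill labels, a typical pattern is to embed a sentence once and rank a list of candidate skill names against it. Continuing from the snippet above (the sentence and candidate skills here are made-up examples):

```python
# Illustrative only: these candidate skill labels are arbitrary examples.
sentence = "We need a welder comfortable working in a dry chamber at depth."
candidate_skills = [
    "weld in hyperbaric conditions",
    "prepare credit reports",
    "install lift governor",
]

sentence_embedding = model.encode([sentence])
skill_embeddings = model.encode(candidate_skills)

# Higher cosine similarity suggests the sentence expresses that skill.
scores = model.similarity(sentence_embedding, skill_embeddings)[0].tolist()
for skill, score in sorted(zip(candidate_skills, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {skill}")
```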

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Datasets

#### skill_sentence

* Dataset: skill_sentence
* Size: 138,260 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>type</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | type |
  |:--------|:-------|:---------|:-----|
  | type    | string | string | string |
  | details | <ul><li>min: 9 tokens</li><li>mean: 35.67 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 6.12 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 5.0 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
  | anchor | positive | type |
  |:-------|:---------|:-----|
  | <code>duties for this role will include conducting water chemistry analysis and managing the laboratory. seeking a seasoned print manufacturing manager with knowledge of printing materials, processes and equipment.</code> | <code>water chemistry analysis</code> | <code>skill_sentence</code> |
  | <code>divers must understand how to calculate dive times and limits to ensure they return safely. We are searching for a multimedia software expert with experience in sound, lighting and recording software.</code> | <code>comply with the planned time for the depth of the dive</code> | <code>skill_sentence</code> |
  | <code>A successful candidate will possess the ability to calibrate laboratory equipment according to industry standards. we are seeking a candidate with experience in preparing government funding dossiers</code> | <code>prepare government funding dossiers</code> | <code>skill_sentence</code> |
* Loss: <code>custom_losses.HardMultipleNegativesRankingLoss</code> with these parameters:
  ```json
  {
      "scale": 20,
      "similarity_fct": "<lambda>"
  }
  ```

#### skill_skill

* Dataset: skill_skill
* Size: 13,891 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>type</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | type |
  |:--------|:-------|:---------|:-----|
  | type    | string | string | string |
  | details | <ul><li>min: 6 tokens</li><li>mean: 29.09 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 6.24 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 5.0 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
  | anchor | positive | type |
  |:-------|:---------|:-----|
  | <code>Adapt and move set pieces during rehearsals and live performances.</code> | <code>adapt sets</code> | <code>skill_skill</code> |
  | <code>Prepare bread and bread products such as sandwiches for consumption.</code> | <code>prepare bread products</code> | <code>skill_skill</code> |
  | <code>The strategies, methods and techniques that increase the organisation's capacity to protect and sustain the services and operations that fulfil the organisational mission and create lasting values by effectively addressing the combined issues of security, preparedness, risk and disaster recovery.</code> | <code>organisational resilience</code> | <code>skill_skill</code> |
* Loss: [<code>CachedMultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativessymmetricrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "mini_batch_size": 64
  }
  ```
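
For reference, a two-dataset setup like the one above is wired together with a single trainer. Below is a minimal sketch of that wiring using the Sentence Transformers trainer API; note that `custom_losses.HardMultipleNegativesRankingLoss` is not part of the library, so the built-in `MultipleNegativesRankingLoss` stands in for it here, and the toy rows merely mirror the column layout described in the tables above:

```python
# Sketch only: HardMultipleNegativesRankingLoss is a custom, non-public loss;
# the built-in MultipleNegativesRankingLoss is used as a stand-in.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import (
    CachedMultipleNegativesSymmetricRankingLoss,
    MultipleNegativesRankingLoss,
)

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Toy (anchor, positive) rows; the "type" column from the tables above is
# omitted, as it is presumably only consumed by the custom loss.
skill_sentence = Dataset.from_dict({
    "anchor": ["seeking a candidate experienced in water chemistry analysis"],
    "positive": ["water chemistry analysis"],
})
skill_skill = Dataset.from_dict({
    "anchor": ["Adapt and move set pieces during rehearsals and live performances."],
    "positive": ["adapt sets"],
})

# One loss per named dataset, matching the loss parameters listed above.
trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={"skill_sentence": skill_sentence, "skill_skill": skill_skill},
    loss={
        "skill_sentence": MultipleNegativesRankingLoss(model, scale=20),
        "skill_skill": CachedMultipleNegativesSymmetricRankingLoss(
            model, scale=20.0, mini_batch_size=64
        ),
    },
)
trainer.train()
```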

### Training Hyperparameters
#### Non-Default Hyperparameters

- `overwrite_output_dir`: True
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4096
- `per_device_eval_batch_size`: 4096
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4096
- `per_device_eval_batch_size`: 4096
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step   |
|:----------:|:------:|
| 0.1053     | 4      |
| 0.2105     | 8      |
| 0.3158     | 12     |
| 0.4211     | 16     |
| 0.5263     | 20     |
| 0.6316     | 24     |
| **0.7368** | **28** |
| 0.8421     | 32     |
| 0.9474     | 36     |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.9.19
- Sentence Transformers: 3.1.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu118
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "final-ablation-output/D-desc_scale=20_lr=5e-05_batch_size=4096_symmetric_loss=True_learn_ontology_0",
  "architectures": [
    "MPNetModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "mpnet",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
  "transformers_version": "4.44.2",
  "vocab_size": 30527
}
config_sentence_transformers.json ADDED
{
  "__version__": {
    "sentence_transformers": "3.1.0",
    "transformers": "4.44.2",
    "pytorch": "2.4.1+cu118"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c2689286444fec6465c1209d401b14196314b1130420bc00b9e9c96f70d52dc8
size 437967672
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_SmartTokenPooling",
    "type": "sbert_patch.SmartTokenPooling.SmartTokenPooling"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "104": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "30526": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "do_lower_case": true,
  "eos_token": "</s>",
  "mask_token": "<mask>",
  "max_length": 128,
  "max_seq_length": 96,
  "model_max_length": 96,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "MPNetTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff