pEpOo committed
Commit 99b5570
Parent: caf81c5

Add SetFit model

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
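
`1_Pooling/config.json` selects mean pooling: the sentence embedding is the masked average of the 768-dimensional token embeddings. A minimal sketch of that computation (hypothetical tensor names, not code from this repository):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: [batch, seq_len, 768]; attention_mask: [batch, seq_len]
    mask = attention_mask.unsqueeze(-1).float()    # [batch, seq_len, 1]
    summed = (token_embeddings * mask).sum(dim=1)  # padding tokens excluded
    counts = mask.sum(dim=1).clamp(min=1e-9)       # valid tokens per sentence
    return summed / counts                         # [batch, 768]
```
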
README.md ADDED
@@ -0,0 +1,321 @@
+ ---
+ library_name: setfit
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ metrics:
+ - accuracy
+ widget:
+ - text: '@SunderCR two hours of Sandstorm remixes. All merged together. No between-song
+     silence.'
+ - text: Discovered Plane Debris Is From Missing Malaysia Airlines Flight 370 | TIME
+     http://t.co/7fSn1GeWUX
+ - text: '#?? #???? #??? #??? MH370: Aircraft debris found on La Reunion is from missing
+     Malaysia Airlines ... http://t.co/oTsM38XMas'
+ - text: 'Today your life could change forever - #Chronicillness can''t be avoided
+     - It can be survived
+ 
+ 
+     Join #MyLifeStory >>> http://t.co/FYJWjDkM5I'
+ - text: SHOUOUT TO @kasad1lla CAUSE HER VOCALS ARE BLAZING HOT LIKE THE WEATHER SHES
+     IN
+ pipeline_tag: text-classification
+ inference: true
+ base_model: sentence-transformers/all-mpnet-base-v2
+ model-index:
+ - name: SetFit with sentence-transformers/all-mpnet-base-v2
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.8058161350844277
+       name: Accuracy
+ ---
+ 
+ # SetFit with sentence-transformers/all-mpnet-base-v2
+ 
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+ 
+ The model has been trained using an efficient few-shot learning technique that involves:
+ 
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal sketch of both phases follows).
+ 
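+ The two phases above can be reproduced with the SetFit `Trainer`. The snippet below is a minimal sketch rather than the exact training script: the dataset behind this model is not published, so `your/dataset` stands in for any 🤗 dataset with `text` and `label` columns.
+ 
+ ```python
+ from datasets import load_dataset
+ from setfit import SetFitModel, Trainer, TrainingArguments
+ 
+ # Placeholder dataset with "text" and "label" columns.
+ dataset = load_dataset("your/dataset")
+ 
+ # Wrapping the base Sentence Transformer gives it a LogisticRegression head.
+ model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
+ 
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(batch_size=16, num_epochs=1),
+     train_dataset=dataset["train"],
+     eval_dataset=dataset["test"],
+ )
+ trainer.train()            # phase 1: contrastive fine-tuning of the embedding body,
+                            # phase 2: fitting the classification head on its features
+ print(trainer.evaluate())  # e.g. {'accuracy': ...}
+ ```
+ 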
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 384 tokens
+ - **Number of Classes:** 2 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+ 
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 0 | <ul><li>'To fight bioterrorism sir.'</li><li>'85V-265V 10W LED Warm White Light Motion Sensor Outdoor Flood Light PIR Lamp AUC http://t.co/NJVPXzMj5V http://t.co/Ijd7WzV5t9'</li><li>'Photo: referencereference: xekstrin: I THOUGHT THE NOSTRILS WERE EYES AND I ALMOST CRIED FROM FEAR partake... http://t.co/O7yYjLuKfJ'</li></ul> |
+ | 1 | <ul><li>'Police officer wounded suspect dead after exchanging shots: RICHMOND Va. (AP) \x89ÛÓ A Richmond police officer wa... http://t.co/Y0qQS2L7bS'</li><li>"There's a weird siren going off here...I hope Hunterston isn't in the process of blowing itself to smithereens..."</li><li>'Iranian warship points weapon at American helicopter... http://t.co/cgFZk8Ha1R'</li></ul> |
+ 
+ ## Evaluation
+ 
+ ### Metrics
+ | Label | Accuracy |
+ |:--------|:---------|
+ | **all** | 0.8058 |
+ 
+ ## Uses
+ 
+ ### Direct Use for Inference
+ 
+ First install the SetFit library:
+ 
+ ```bash
+ pip install setfit
+ ```
+ 
+ Then you can load this model and run inference.
+ 
+ ```python
+ from setfit import SetFitModel
+ 
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("pEpOo/catastrophy6")
+ # Run inference
+ preds = model("SHOUOUT TO @kasad1lla CAUSE HER VOCALS ARE BLAZING HOT LIKE THE WEATHER SHES IN")
+ ```
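+ 
+ Besides hard labels, the underlying LogisticRegression head can also return class probabilities. A small sketch (judging by the label examples above, index 0 ≈ not disaster-related, index 1 ≈ disaster-related):
+ 
+ ```python
+ # Probabilities per class, shape [n_texts, 2].
+ probas = model.predict_proba([
+     "Iranian warship points weapon at American helicopter",
+     "two hours of Sandstorm remixes",
+ ])
+ ```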
+ 
+ <!--
+ ### Downstream Use
+ 
+ *List how someone could finetune this model on their own dataset.*
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Set Metrics
+ | Training set | Min | Median  | Max |
+ |:-------------|:----|:--------|:----|
+ | Word count   | 1   | 14.7175 | 54  |
+ 
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0     | 1335                  |
+ | 1     | 948                   |
+ 
+ ### Training Hyperparameters
+ - batch_size: (16, 16)
+ - num_epochs: (1, 1)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 20
+ - body_learning_rate: (2e-05, 2e-05)
+ - head_learning_rate: 2e-05
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+ 
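+ These values map one-to-one onto SetFit's `TrainingArguments`, where tuple values pair the embedding phase with the classifier phase. A sketch of re-creating this configuration:
+ 
+ ```python
+ from setfit import TrainingArguments
+ 
+ args = TrainingArguments(
+     batch_size=(16, 16),                # (embedding phase, classifier phase)
+     num_epochs=(1, 1),
+     sampling_strategy="oversampling",
+     num_iterations=20,                  # contrastive pairs generated per sample
+     body_learning_rate=(2e-05, 2e-05),
+     head_learning_rate=2e-05,
+     margin=0.25,
+     warmup_proportion=0.1,
+     seed=42,
+ )
+ ```
+ 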
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0094 | 1 | 0.0044 | - |
+ | 0.4717 | 50 | 0.005 | - |
+ | 0.9434 | 100 | 0.0007 | - |
+ | 0.0002 | 1 | 0.4675 | - |
+ | 0.0088 | 50 | 0.3358 | - |
+ | 0.0175 | 100 | 0.2516 | - |
+ | 0.0263 | 150 | 0.2158 | - |
+ | 0.0350 | 200 | 0.1924 | - |
+ | 0.0438 | 250 | 0.1907 | - |
+ | 0.0526 | 300 | 0.2166 | - |
+ | 0.0613 | 350 | 0.2243 | - |
+ | 0.0701 | 400 | 0.0644 | - |
+ | 0.0788 | 450 | 0.1924 | - |
+ | 0.0876 | 500 | 0.166 | - |
+ | 0.0964 | 550 | 0.2117 | - |
+ | 0.1051 | 600 | 0.0793 | - |
+ | 0.1139 | 650 | 0.0808 | - |
+ | 0.1226 | 700 | 0.1183 | - |
+ | 0.1314 | 750 | 0.0808 | - |
+ | 0.1402 | 800 | 0.0194 | - |
+ | 0.1489 | 850 | 0.0699 | - |
+ | 0.1577 | 900 | 0.0042 | - |
+ | 0.1664 | 950 | 0.0048 | - |
+ | 0.1752 | 1000 | 0.1886 | - |
+ | 0.1840 | 1050 | 0.0008 | - |
+ | 0.1927 | 1100 | 0.0033 | - |
+ | 0.2015 | 1150 | 0.0361 | - |
+ | 0.2102 | 1200 | 0.12 | - |
+ | 0.2190 | 1250 | 0.0035 | - |
+ | 0.2278 | 1300 | 0.0002 | - |
+ | 0.2365 | 1350 | 0.0479 | - |
+ | 0.2453 | 1400 | 0.0568 | - |
+ | 0.2540 | 1450 | 0.0004 | - |
+ | 0.2628 | 1500 | 0.0002 | - |
+ | 0.2715 | 1550 | 0.0013 | - |
+ | 0.2803 | 1600 | 0.0005 | - |
+ | 0.2891 | 1650 | 0.0014 | - |
+ | 0.2978 | 1700 | 0.0004 | - |
+ | 0.3066 | 1750 | 0.0008 | - |
+ | 0.3153 | 1800 | 0.0616 | - |
+ | 0.3241 | 1850 | 0.0003 | - |
+ | 0.3329 | 1900 | 0.001 | - |
+ | 0.3416 | 1950 | 0.0581 | - |
+ | 0.3504 | 2000 | 0.0657 | - |
+ | 0.3591 | 2050 | 0.0584 | - |
+ | 0.3679 | 2100 | 0.0339 | - |
+ | 0.3767 | 2150 | 0.0081 | - |
+ | 0.3854 | 2200 | 0.0001 | - |
+ | 0.3942 | 2250 | 0.0009 | - |
+ | 0.4029 | 2300 | 0.0018 | - |
+ | 0.4117 | 2350 | 0.0001 | - |
+ | 0.4205 | 2400 | 0.0012 | - |
+ | 0.4292 | 2450 | 0.0001 | - |
+ | 0.4380 | 2500 | 0.0003 | - |
+ | 0.4467 | 2550 | 0.0035 | - |
+ | 0.4555 | 2600 | 0.0172 | - |
+ | 0.4643 | 2650 | 0.0383 | - |
+ | 0.4730 | 2700 | 0.0222 | - |
+ | 0.4818 | 2750 | 0.0013 | - |
+ | 0.4905 | 2800 | 0.0007 | - |
+ | 0.4993 | 2850 | 0.0003 | - |
+ | 0.5081 | 2900 | 0.1247 | - |
+ | 0.5168 | 2950 | 0.023 | - |
+ | 0.5256 | 3000 | 0.0002 | - |
+ | 0.5343 | 3050 | 0.0002 | - |
+ | 0.5431 | 3100 | 0.0666 | - |
+ | 0.5519 | 3150 | 0.0002 | - |
+ | 0.5606 | 3200 | 0.0003 | - |
+ | 0.5694 | 3250 | 0.0012 | - |
+ | 0.5781 | 3300 | 0.0085 | - |
+ | 0.5869 | 3350 | 0.0003 | - |
+ | 0.5957 | 3400 | 0.0002 | - |
+ | 0.6044 | 3450 | 0.0004 | - |
+ | 0.6132 | 3500 | 0.013 | - |
+ | 0.6219 | 3550 | 0.0089 | - |
+ | 0.6307 | 3600 | 0.0001 | - |
+ | 0.6395 | 3650 | 0.0002 | - |
+ | 0.6482 | 3700 | 0.0039 | - |
+ | 0.6570 | 3750 | 0.0031 | - |
+ | 0.6657 | 3800 | 0.0009 | - |
+ | 0.6745 | 3850 | 0.0002 | - |
+ | 0.6833 | 3900 | 0.0002 | - |
+ | 0.6920 | 3950 | 0.0001 | - |
+ | 0.7008 | 4000 | 0.0 | - |
+ | 0.7095 | 4050 | 0.0212 | - |
+ | 0.7183 | 4100 | 0.0001 | - |
+ | 0.7270 | 4150 | 0.0586 | - |
+ | 0.7358 | 4200 | 0.0001 | - |
+ | 0.7446 | 4250 | 0.0003 | - |
+ | 0.7533 | 4300 | 0.0126 | - |
+ | 0.7621 | 4350 | 0.0001 | - |
+ | 0.7708 | 4400 | 0.0001 | - |
+ | 0.7796 | 4450 | 0.0001 | - |
+ | 0.7884 | 4500 | 0.0 | - |
+ | 0.7971 | 4550 | 0.0002 | - |
+ | 0.8059 | 4600 | 0.0002 | - |
+ | 0.8146 | 4650 | 0.0001 | - |
+ | 0.8234 | 4700 | 0.0035 | - |
+ | 0.8322 | 4750 | 0.0002 | - |
+ | 0.8409 | 4800 | 0.0002 | - |
+ | 0.8497 | 4850 | 0.0001 | - |
+ | 0.8584 | 4900 | 0.0001 | - |
+ | 0.8672 | 4950 | 0.0001 | - |
+ | 0.8760 | 5000 | 0.0003 | - |
+ | 0.8847 | 5050 | 0.0 | - |
+ | 0.8935 | 5100 | 0.0041 | - |
+ | 0.9022 | 5150 | 0.0001 | - |
+ | 0.9110 | 5200 | 0.0001 | - |
+ | 0.9198 | 5250 | 0.0001 | - |
+ | 0.9285 | 5300 | 0.0001 | - |
+ | 0.9373 | 5350 | 0.0001 | - |
+ | 0.9460 | 5400 | 0.0001 | - |
+ | 0.9548 | 5450 | 0.0001 | - |
+ | 0.9636 | 5500 | 0.0001 | - |
+ | 0.9723 | 5550 | 0.0001 | - |
+ | 0.9811 | 5600 | 0.0002 | - |
+ | 0.9898 | 5650 | 0.0271 | - |
+ | 0.9986 | 5700 | 0.0 | - |
+ 
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.0.1
+ - Sentence Transformers: 2.2.2
+ - Transformers: 4.35.2
+ - PyTorch: 2.1.0+cu121
+ - Datasets: 2.15.0
+ - Tokenizers: 0.15.0
+ 
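+ To approximate this environment, the listed versions can be pinned at install time (a sketch; newer versions will generally also load the model):
+ 
+ ```bash
+ pip install setfit==1.0.1 sentence-transformers==2.2.2 transformers==4.35.2 datasets==2.15.0 tokenizers==0.15.0
+ ```
+ 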
+ ## Citation
+ 
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "/root/.cache/torch/sentence_transformers/sentence-transformers_all-mpnet-base-v2/",
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.2",
+   "vocab_size": 30527
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.0.0",
+     "transformers": "4.6.1",
+     "pytorch": "1.8.1"
+   }
+ }
config_setfit.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "labels": null,
+   "normalize_embeddings": false
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:347b8715b68cfa8fc9e4d6abcd34f10081f81708862ffab1d314ed4616785010
+ size 437967672
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:316aee40cbd4076613e9797f0988e341187aa6dce871f27d27bfee9388f311d4
+ size 6991
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
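
`modules.json` chains three sentence-transformers modules: the MPNet transformer body, the mean pooling layer configured in `1_Pooling/config.json`, and L2 normalization. A minimal sketch of how sentence-transformers would assemble an equivalent pipeline (not code from this repository):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: transformer body producing token embeddings.
word = models.Transformer("sentence-transformers/all-mpnet-base-v2", max_seq_length=384)
# Module 1: mean pooling over token embeddings.
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
# Module 2: L2-normalize the pooled sentence embedding.
normalize = models.Normalize()

model = SentenceTransformer(modules=[word, pooling, normalize])
```
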
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 384,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30526": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "mask_token": "<mask>",
+   "max_length": 128,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff