venkateshmurugadas committed
Commit f55a22f
1 Parent(s): 968b4b1

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
    "word_embedding_dimension": 768,
    "pooling_mode_cls_token": false,
    "pooling_mode_mean_tokens": true,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false,
    "pooling_mode_weightedmean_tokens": false,
    "pooling_mode_lasttoken": false,
    "include_prompt": true
}
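These flags configure the pooling module: only `pooling_mode_mean_tokens` is enabled, so sentence embeddings are the attention-masked mean of the token embeddings. A minimal sketch of that pooling step outside Sentence Transformers, assuming the checkpoint loads through `transformers` with `trust_remote_code=True` (needed for the remote NomicBert code):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "venkateshmurugadas/nomic-v1.5-financial-matryoshka"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average the remaining token embeddings.
    mask = attention_mask.unsqueeze(-1).float()
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

batch = tokenizer(["What is the purpose of Visa Direct?"], padding=True, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)
embedding = mean_pool(out.last_hidden_state, batch["attention_mask"])
print(embedding.shape)  # torch.Size([1, 768])
```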
README.md ADDED
@@ -0,0 +1,819 @@
---
base_model: nomic-ai/nomic-embed-text-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Chevron aims to support a diverse and inclusive supply chain that
    reflects the communities where they operate, believing that a diverse supply chain
    contributes to their success and growth.
  sentences:
  - What was the renewal rate for Costco memberships in the U.S. and Canada at the
    end of 2023?
  - What is Chevron's approach towards maintaining a diverse and inclusive supply
    chain?
  - What percentage growth did LinkedIn revenue experience?
- source_sentence: Visa Direct is part of Visa’s strategy beyond C2B payments and
    helps facilitate the delivery of funds to eligible cards, deposit accounts and
    digital wallets across more than 190 countries and territories. Visa Direct supports
    multiple use cases, such as P2P payments and account-to-account transfers, business
    and government payouts to individuals or small businesses, merchant settlements
    and refunds.
  sentences:
  - What type of situations will the company record a liability for legal proceedings?
  - What is the purpose of Visa Direct?
  - What benefits does Airbnb's AirCover for guests offer?
- source_sentence: As of December 31, 2023, we had $267 million of total unrecognized
    compensation cost related to nonvested stock-based compensation awards granted
    under our plans.
  sentences:
  - How much total unrecognized compensation cost related to nonvested stock-based
    compensation awards was reported as of December 31, 2023?
  - What changes are planned for the company's reporting metrics starting in fiscal
    year 202es and how does this affect the treatment of paused subscriptions?
  - How much does HP expect to pay for benefit claims for its post-retirement benefit
    plans in fiscal year 2024?
- source_sentence: Discrete tax items resulted in a (benefit) provision for income
    taxes of $(18.1) million and $(11.9) million for the years ended December 31,
    2023 and 2022, respectively.
  sentences:
  - What was the total cost of TNT Express's business realignment through 2023?
  - What is the purpose of adding research and development expenses and general and
    administrative expenses to the loss from operations when calculating the contribution
    margin?
  - What impact did discrete tax items have on the tax provision in 2023 compared
    to 2022?
- source_sentence: 'The company may issue debt or equity securities occasionally to
    provide additional liquidity or pursue opportunities to enhance its long-term
    competitive position while maintaining a strong balance sheet. '
  sentences:
  - What might the company do to increase liquidity or pursue long-term competitive
    advantages while managing a strong balance sheet?
  - What types of technologies does the Mortgage Technology segment employ to enhance
    operational efficiency?
  - Which section of a financial document covers Financial Statements and Supplementary
    Data?
model-index:
- name: Nomic Embed 1.5 Financial Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.6928571428571428
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8228571428571428
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.87
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9071428571428571
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6928571428571428
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2742857142857143
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.174
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0907142857142857
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6928571428571428
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8228571428571428
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.87
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9071428571428571
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8029973671837228
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7692715419501133
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7724352164684344
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.6914285714285714
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8271428571428572
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.87
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9085714285714286
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6914285714285714
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2757142857142857
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.174
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09085714285714284
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6914285714285714
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8271428571428572
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.87
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9085714285714286
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8029523922190992
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7687732426303853
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7717841390041892
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.6871428571428572
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8285714285714286
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8728571428571429
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8985714285714286
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6871428571428572
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27619047619047615
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17457142857142854
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08985714285714284
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6871428571428572
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8285714285714286
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8728571428571429
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8985714285714286
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7983704009707536
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7655901360544215
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7693376855880492
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.6671428571428571
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8185714285714286
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8557142857142858
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8957142857142857
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6671428571428571
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27285714285714285
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17114285714285712
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08957142857142855
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6671428571428571
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8185714285714286
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8557142857142858
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8957142857142857
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7849638501826605
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7491031746031743
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.752516331310788
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.6528571428571428
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7871428571428571
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8271428571428572
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8771428571428571
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6528571428571428
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2623809523809524
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1654285714285714
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0877142857142857
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6528571428571428
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7871428571428571
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8271428571428572
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8771428571428571
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7639694587103518
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7279750566893419
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7317631790989764
      name: Cosine Map@100
---

# Nomic Embed 1.5 Financial Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) <!-- at revision b0753ae76394dd36bcfb912a46018088bca48be0 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("venkateshmurugadas/nomic-v1.5-financial-matryoshka")
# Run inference
sentences = [
    'The company may issue debt or equity securities occasionally to provide additional liquidity or pursue opportunities to enhance its long-term competitive position while maintaining a strong balance sheet. ',
    'What might the company do to increase liquidity or pursue long-term competitive advantages while managing a strong balance sheet?',
    'What types of technologies does the Mortgage Technology segment employ to enhance operational efficiency?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
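Because the model was trained with MatryoshkaLoss over the dimensions 768/512/256/128/64, embeddings can also be truncated to a smaller size at load time. A minimal sketch using the `truncate_dim` argument of `SentenceTransformer` (available in recent Sentence Transformers releases):

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 Matryoshka dimensions of each embedding.
model = SentenceTransformer(
    "venkateshmurugadas/nomic-v1.5-financial-matryoshka",
    truncate_dim=256,
)

embeddings = model.encode([
    "What is the purpose of Visa Direct?",
    "Visa Direct helps facilitate the delivery of funds to eligible cards, deposit accounts and digital wallets.",
])
print(embeddings.shape)  # (2, 256)

# Cosine similarity still works on the truncated vectors, at a small quality cost.
print(model.similarity(embeddings[:1], embeddings[1:]))
```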

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6929     |
| cosine_accuracy@3   | 0.8229     |
| cosine_accuracy@5   | 0.87       |
| cosine_accuracy@10  | 0.9071     |
| cosine_precision@1  | 0.6929     |
| cosine_precision@3  | 0.2743     |
| cosine_precision@5  | 0.174      |
| cosine_precision@10 | 0.0907     |
| cosine_recall@1     | 0.6929     |
| cosine_recall@3     | 0.8229     |
| cosine_recall@5     | 0.87       |
| cosine_recall@10    | 0.9071     |
| cosine_ndcg@10      | 0.803      |
| cosine_mrr@10       | 0.7693     |
| **cosine_map@100**  | **0.7724** |

#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6914     |
| cosine_accuracy@3   | 0.8271     |
| cosine_accuracy@5   | 0.87       |
| cosine_accuracy@10  | 0.9086     |
| cosine_precision@1  | 0.6914     |
| cosine_precision@3  | 0.2757     |
| cosine_precision@5  | 0.174      |
| cosine_precision@10 | 0.0909     |
| cosine_recall@1     | 0.6914     |
| cosine_recall@3     | 0.8271     |
| cosine_recall@5     | 0.87       |
| cosine_recall@10    | 0.9086     |
| cosine_ndcg@10      | 0.803      |
| cosine_mrr@10       | 0.7688     |
| **cosine_map@100**  | **0.7718** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6871     |
| cosine_accuracy@3   | 0.8286     |
| cosine_accuracy@5   | 0.8729     |
| cosine_accuracy@10  | 0.8986     |
| cosine_precision@1  | 0.6871     |
| cosine_precision@3  | 0.2762     |
| cosine_precision@5  | 0.1746     |
| cosine_precision@10 | 0.0899     |
| cosine_recall@1     | 0.6871     |
| cosine_recall@3     | 0.8286     |
| cosine_recall@5     | 0.8729     |
| cosine_recall@10    | 0.8986     |
| cosine_ndcg@10      | 0.7984     |
| cosine_mrr@10       | 0.7656     |
| **cosine_map@100**  | **0.7693** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6671     |
| cosine_accuracy@3   | 0.8186     |
| cosine_accuracy@5   | 0.8557     |
| cosine_accuracy@10  | 0.8957     |
| cosine_precision@1  | 0.6671     |
| cosine_precision@3  | 0.2729     |
| cosine_precision@5  | 0.1711     |
| cosine_precision@10 | 0.0896     |
| cosine_recall@1     | 0.6671     |
| cosine_recall@3     | 0.8186     |
| cosine_recall@5     | 0.8557     |
| cosine_recall@10    | 0.8957     |
| cosine_ndcg@10      | 0.785      |
| cosine_mrr@10       | 0.7491     |
| **cosine_map@100**  | **0.7525** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6529     |
| cosine_accuracy@3   | 0.7871     |
| cosine_accuracy@5   | 0.8271     |
| cosine_accuracy@10  | 0.8771     |
| cosine_precision@1  | 0.6529     |
| cosine_precision@3  | 0.2624     |
| cosine_precision@5  | 0.1654     |
| cosine_precision@10 | 0.0877     |
| cosine_recall@1     | 0.6529     |
| cosine_recall@3     | 0.7871     |
| cosine_recall@5     | 0.8271     |
| cosine_recall@10    | 0.8771     |
| cosine_ndcg@10      | 0.764      |
| cosine_mrr@10       | 0.728      |
| **cosine_map@100**  | **0.7318** |

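The tables above come from Sentence Transformers' `InformationRetrievalEvaluator`. A minimal sketch of how such an evaluation can be set up; the query, corpus, and relevance data here are illustrative stand-ins, not the actual held-out split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("venkateshmurugadas/nomic-v1.5-financial-matryoshka")

# Illustrative data: query id -> text, doc id -> text, query id -> set of relevant doc ids.
queries = {"q1": "What is the purpose of Visa Direct?"}
corpus = {
    "d1": "Visa Direct helps facilitate the delivery of funds to eligible cards, "
          "deposit accounts and digital wallets across more than 190 countries.",
    "d2": "Costco memberships in the U.S. and Canada renewed at a high rate in 2023.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```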
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  |         | positive                                                                            | anchor                                                                             |
  |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                             |
  | details | <ul><li>min: 8 tokens</li><li>mean: 46.46 tokens</li><li>max: 371 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.45 tokens</li><li>max: 41 tokens</li></ul> |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>We evaluate uncertain tax positions periodically, considering changes in facts and circumstances, such as new regulations or recent judicial opinions, as well as the status of audit activities by taxing authorities.</code> | <code>How are changes to a company's uncertain tax positions evaluated?</code> |
  | <code>During 2022 and 2023, our operating margin was impacted by increased wage rates. During 2022, our gross margin was impacted by higher air freight costs as a result of global supply chain disruption.</code> | <code>What effects did inflation have on the company's operating results during 2022 and 2023?</code> |
  | <code>To mitigate these developments, we are continually working to evolve our advertising systems to improve the performance of our ad products. We are developing privacy enhancing technologies to deliver relevant ads and measurement capabilities while reducing the amount of personal information we process, including by relying more on anonymized or aggregated third-party data. In addition, we are developing tools that enable marketers to share their data into our systems, as well as ad products that generate more valuable signals within our apps. More broadly, we also continue to innovate our advertising tools to help marketers prepare campaigns and connect with consumers, including developing growing formats such as Reels ads and our business messaging ad products. Across all of these efforts, we are making significant investments in artificial intelligence (AI), including generative AI, to improve our delivery, targeting, and measurement capabilities. Further, we are focused on driving onsite conversions in our business messaging ad products by developing new features and scaling existing features.</code> | <code>What technological solutions is the company developing to improve ad delivery?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
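In Sentence Transformers code, these parameters correspond to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`. A minimal sketch, assuming the base model is loaded fresh for training:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

# Contrastive loss over in-batch negatives, applied at every Matryoshka dimension.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    loss=inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```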

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

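As a sketch, the non-default hyperparameters above map onto `SentenceTransformerTrainingArguments` as follows; the output directory is a hypothetical placeholder and all other settings keep their defaults:

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

# Placeholder output path; every other value mirrors the list above.
args = SentenceTransformerTrainingArguments(
    output_dir="nomic-v1.5-financial-matryoshka",
    eval_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=64,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```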
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 64
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step   | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.4063     | 10     | 0.1329        | -                      | -                      | -                      | -                     | -                      |
| 0.8127     | 20     | 0.0567        | -                      | -                      | -                      | -                     | -                      |
| 0.9752     | 24     | -             | 0.7416                 | 0.7604                 | 0.7678                 | 0.7249                | 0.7758                 |
| 1.2190     | 30     | 0.0415        | -                      | -                      | -                      | -                     | -                      |
| 1.6254     | 40     | 0.0043        | -                      | -                      | -                      | -                     | -                      |
| 1.9911     | 49     | -             | 0.7491                 | 0.7648                 | 0.7700                 | 0.7315                | 0.7731                 |
| 2.0317     | 50     | 0.0059        | -                      | -                      | -                      | -                     | -                      |
| 2.4381     | 60     | 0.0045        | -                      | -                      | -                      | -                     | -                      |
| 2.8444     | 70     | 0.0013        | -                      | -                      | -                      | -                     | -                      |
| **2.9663** | **73** | **-**         | **0.7531**             | **0.7703**             | **0.7712**             | **0.7327**            | **0.7738**             |
| 3.2508     | 80     | 0.0031        | -                      | -                      | -                      | -                     | -                      |
| 3.6571     | 90     | 0.0009        | -                      | -                      | -                      | -                     | -                      |
| 3.9010     | 96     | -             | 0.7525                 | 0.7693                 | 0.7718                 | 0.7318                | 0.7724                 |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,58 @@
{
  "_name_or_path": "nomic-ai/nomic-embed-text-v1.5",
  "activation_function": "swiglu",
  "architectures": [
    "NomicBertModel"
  ],
  "attn_pdrop": 0.0,
  "auto_map": {
    "AutoConfig": "nomic-ai/nomic-bert-2048--configuration_hf_nomic_bert.NomicBertConfig",
    "AutoModel": "nomic-ai/nomic-bert-2048--modeling_hf_nomic_bert.NomicBertModel",
    "AutoModelForMaskedLM": "nomic-ai/nomic-bert-2048--modeling_hf_nomic_bert.NomicBertForPreTraining"
  },
  "bos_token_id": null,
  "causal": false,
  "dense_seq_output": true,
  "embd_pdrop": 0.0,
  "eos_token_id": null,
  "fused_bias_fc": true,
  "fused_dropout_add_ln": true,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-12,
  "max_trained_positions": 2048,
  "mlp_fc1_bias": false,
  "mlp_fc2_bias": false,
  "model_type": "nomic_bert",
  "n_embd": 768,
  "n_head": 12,
  "n_inner": 3072,
  "n_layer": 12,
  "n_positions": 8192,
  "pad_vocab_size_multiple": 64,
  "parallel_block": false,
  "parallel_block_tied_norm": false,
  "prenorm": false,
  "qkv_proj_bias": false,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.0,
  "rotary_emb_base": 1000,
  "rotary_emb_fraction": 1.0,
  "rotary_emb_interleaved": false,
  "rotary_emb_scale_base": null,
  "rotary_scaling_factor": null,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.0,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "use_flash_attn": true,
  "use_rms_norm": false,
  "use_xentropy": true,
  "vocab_size": 30528
}
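The `auto_map` entries above point at the remote NomicBert implementation in `nomic-ai/nomic-bert-2048`, so loading this config (or model) directly through plain `transformers` requires trusting that remote code. A brief sketch:

```python
from transformers import AutoConfig

# trust_remote_code lets AutoConfig resolve the NomicBertConfig class named in auto_map.
config = AutoConfig.from_pretrained(
    "venkateshmurugadas/nomic-v1.5-financial-matryoshka",
    trust_remote_code=True,
)
print(config.model_type)   # nomic_bert
print(config.n_positions)  # 8192
print(config.n_embd)       # 768
```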
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.41.2",
    "pytorch": "2.1.2+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3ffdac84df4039824c0975c9299af8f0237edc013e2f4d27042c97e8b193f61
size 546938168
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
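`modules.json` is what lets `SentenceTransformer` rebuild the two-stage pipeline: the Transformer encoder followed by the pooling layer stored in `1_Pooling`. An equivalent manual assembly, sketched with the Sentence Transformers module classes and using the base model ID for illustration:

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the NomicBert encoder; module 1: mean pooling, mirroring 1_Pooling/config.json.
transformer = models.Transformer(
    "nomic-ai/nomic-embed-text-v1.5",
    max_seq_length=8192,
    model_args={"trust_remote_code": True},
    config_args={"trust_remote_code": True},
)
pooling = models.Pooling(
    word_embedding_dimension=transformer.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[transformer, pooling])
print(model)
```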
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 8192,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,55 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 8192,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff