tomaarsen committed · verified · 71bd449 · 1 Parent(s): 8e44452

Add new CrossEncoder model
README.md ADDED
@@ -0,0 +1,423 @@
+ ---
+ language:
+ - en
+ tags:
+ - sentence-transformers
+ - cross-encoder
+ - text-classification
+ - generated_from_trainer
+ - dataset_size:397226027
+ - loss:BinaryCrossEntropyLoss
+ base_model: microsoft/MiniLM-L12-H384-uncased
+ datasets:
+ - sentence-transformers/msmarco
+ pipeline_tag: text-classification
+ library_name: sentence-transformers
+ metrics:
+ - map
+ - mrr@10
+ - ndcg@10
+ model-index:
+ - name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
+   results: []
+ ---
+ 
+ # CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
+ 
+ This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Cross Encoder
+ - **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Output Labels:** 1 label
+ - **Training Dataset:**
+     - [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled)
+ - **Language:** en
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import CrossEncoder
+ 
+ # Download from the 🤗 Hub
+ model = CrossEncoder("tomaarsen/reranker-MiniLM-L12-H384-uncased-msmarco-bce")
+ # Get scores for pairs of texts
+ pairs = [
+     ['what is a jewel yam', 'Wild Yam can be very beneficial for nervousness, restlessness and other nervous conditions. As a stimulant for increased bile flow, it can help to relieve hepatic congestion, bilious colic and gallstones.'],
+     ['hours of daytona', '24 Hours of Daytona. The 24 Hours of Daytona, currently known as the Rolex 24 At Daytona for sponsorship reasons, is a 24-hour sports car endurance race held annually at Daytona International Speedway in Daytona Beach, Florida. It is run on a 3.56-mile (5.73 km) combined road course, utilizing portions of the NASCAR tri-oval and an infield road course.'],
+     ['how much do autozone workers get paid', 'The typical AutoZone Sales Associate salary is $9. Sales Associate salaries at AutoZone can range from $7-$12. This estimate is based upon 59 AutoZone Sales Associate salary report(s) provided by employees or estimated based upon statistical methods. See all Sales Associate salaries to learn how this stacks up in the market.'],
+     ['what are the special sensory receptors', 'Sensory Neurons. Sensory Neurons: + add to my flashcards cite this term. You have a few different types of neurons in your body including interneurons, motor neurons, and sensory neurons. Sensory neurons (also known as Afferent Neurons) are responsible for bringing information from sensory receptors (like the nerves in your hand) to the central nervous system (spinal cord and brain).'],
+     ['how long to cook salmon on the grill', 'Place the bag with the marinade and salmon fillets in the refrigerator for 30 minutes. 1 Salmon, like all fish, is not as dense as red meats and poultry. 2 As a result, it does not need to be marinaded for long in order to absorb flavor.3 Remove the salmon from the refrigerator at least 10 minutes prior to cooking.lace the broiler pan 5 1/2 inches (14 cm) away from the top heating element and cook the salmon until done. 1 The salmon is done when you can effortlessly flake the fillets with a fork. 2 The center should be opaque.'],
+ ]
+ scores = model.predict(pairs)
+ print(scores.shape)
+ # (5,)
+ 
+ # Or rank different texts based on similarity to a single text
+ ranks = model.rank(
+     'what is a jewel yam',
+     [
+         'Wild Yam can be very beneficial for nervousness, restlessness and other nervous conditions. As a stimulant for increased bile flow, it can help to relieve hepatic congestion, bilious colic and gallstones.',
+         '24 Hours of Daytona. The 24 Hours of Daytona, currently known as the Rolex 24 At Daytona for sponsorship reasons, is a 24-hour sports car endurance race held annually at Daytona International Speedway in Daytona Beach, Florida. It is run on a 3.56-mile (5.73 km) combined road course, utilizing portions of the NASCAR tri-oval and an infield road course.',
+         'The typical AutoZone Sales Associate salary is $9. Sales Associate salaries at AutoZone can range from $7-$12. This estimate is based upon 59 AutoZone Sales Associate salary report(s) provided by employees or estimated based upon statistical methods. See all Sales Associate salaries to learn how this stacks up in the market.',
+         'Sensory Neurons. Sensory Neurons: + add to my flashcards cite this term. You have a few different types of neurons in your body including interneurons, motor neurons, and sensory neurons. Sensory neurons (also known as Afferent Neurons) are responsible for bringing information from sensory receptors (like the nerves in your hand) to the central nervous system (spinal cord and brain).',
+         'Place the bag with the marinade and salmon fillets in the refrigerator for 30 minutes. 1 Salmon, like all fish, is not as dense as red meats and poultry. 2 As a result, it does not need to be marinaded for long in order to absorb flavor.3 Remove the salmon from the refrigerator at least 10 minutes prior to cooking.lace the broiler pan 5 1/2 inches (14 cm) away from the top heating element and cook the salmon until done. 1 The salmon is done when you can effortlessly flake the fillets with a fork. 2 The center should be opaque.',
+     ]
+ )
+ # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
+ ```
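+ 
+ Since the checkpoint is a plain `BertForSequenceClassification` head with a single output label (see the `config.json` in this commit), you can also score pairs directly with 🤗 Transformers. This is a minimal sketch rather than an official recipe; the sigmoid is applied manually because the classification head outputs raw logits:
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ 
+ repo = "tomaarsen/reranker-MiniLM-L12-H384-uncased-msmarco-bce"
+ tokenizer = AutoTokenizer.from_pretrained(repo)
+ model = AutoModelForSequenceClassification.from_pretrained(repo)
+ model.eval()
+ 
+ # The cross encoder consumes query and passage together as one input pair
+ features = tokenizer(
+     ["how much do autozone workers get paid"],
+     ["The typical AutoZone Sales Associate salary is $9."],
+     padding=True,
+     truncation=True,
+     return_tensors="pt",
+ )
+ with torch.no_grad():
+     logits = model(**features).logits.squeeze(-1)  # shape: (batch_size,)
+ scores = torch.sigmoid(logits)  # map raw logits to 0-1 relevance scores
+ ```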
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ ## Evaluation
+ 
+ ### Metrics
+ 
+ #### Cross Encoder Reranking
+ 
+ * Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
+ * Evaluated with [<code>CERerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CERerankingEvaluator)
+ 
+ | Metric      | NanoMSMARCO          | NanoNFCorpus         | NanoNQ               |
+ |:------------|:---------------------|:---------------------|:---------------------|
+ | map         | 0.6127 (+0.1231)     | 0.3432 (+0.0728)     | 0.6921 (+0.2715)     |
+ | mrr@10      | 0.6019 (+0.1244)     | 0.5456 (+0.0457)     | 0.7062 (+0.2795)     |
+ | **ndcg@10** | **0.6648 (+0.1244)** | **0.3769 (+0.0519)** | **0.7462 (+0.2455)** |
+ 
+ #### Cross Encoder Nano BEIR
+ 
+ * Dataset: `NanoBEIR_mean`
+ * Evaluated with [<code>CENanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CENanoBEIREvaluator)
+ 
+ | Metric      | Value                |
+ |:------------|:---------------------|
+ | map         | 0.5493 (+0.1558)     |
+ | mrr@10      | 0.6179 (+0.1499)     |
+ | **ndcg@10** | **0.5960 (+0.1406)** |
+ 
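+ The table above should be reproducible with the same evaluator. Below is a minimal sketch, assuming `CENanoBEIREvaluator` accepts a `dataset_names` list like its bi-encoder counterpart `NanoBEIREvaluator`; treat that argument name as an assumption:
+ ```python
+ from sentence_transformers import CrossEncoder
+ from sentence_transformers.cross_encoder.evaluation import CENanoBEIREvaluator
+ 
+ model = CrossEncoder("tomaarsen/reranker-MiniLM-L12-H384-uncased-msmarco-bce")
+ 
+ # dataset_names mirroring the three Nano datasets reported above (an assumption)
+ evaluator = CENanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
+ results = evaluator(model)
+ print(results)  # per-dataset map, mrr@10 and ndcg@10, plus the NanoBEIR mean
+ ```
+ 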
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### ms-marco-shuffled
+ 
+ * Dataset: [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled) at [88847c6](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled/tree/88847c65252168a8c2504664289ef21a9df0ca74)
+ * Size: 397,226,027 training samples
+ * Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | query | passage | score |
+   |:--------|:------|:--------|:------|
+   | type    | string | string | float |
+   | details | <ul><li>min: 10 characters</li><li>mean: 34.03 characters</li><li>max: 148 characters</li></ul> | <ul><li>min: 72 characters</li><li>mean: 345.31 characters</li><li>max: 913 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.52</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | query | passage | score |
+   |:------|:--------|:------|
+   | <code>when was ron marhofer founded?</code> | <code>What are the birthdays of Ron Shirley Bobby Brantley and Amy Shirley from Lizard Lick Towing? Ron Shirley's birthday is April 13. His wife Amy Shirley celebrates her birthday on May 4, and Bobby Brantley's birthday is September 26.</code> | <code>0.0</code> |
+   | <code>what should the average medical assistant make</code> | <code>For example, the Bureau of Labor Statistics reports that as of May 2014, medical assistant jobs located in Offices of Physicians paid about $31,230 a year on average c. These roles (in Offices of Physicians) made up a large portion of medical assistant jobs, totaling 349,370 positions as of May 2014 c. General Medical and Surgical hospitals were another large employer, carrying 85,040 medical assistants c on their payrolls.</code> | <code>1.0</code> |
+   | <code>what type of rock form in warm ocean bottoms</code> | <code>Second, sedimentary rocks form on the bottom of the ocean when particles rain down from the surface. These particles can become compressed and cemented to form limestone. Fossilized sea creatures are often found in these rocks. Most of the mountains around Las Vegas are composed of sedimentary rocks. Red Rock Canyon (photo) provides a spectacular example of both types: the gray mountains are limestone, and the red-and-white hills are sandstone.</code> | <code>1.0</code> |
+ * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
+   ```json
+   {
+       "activation_fct": "torch.nn.modules.linear.Identity",
+       "pos_weight": null
+   }
+   ```
+ 
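+ For reference, the following is a minimal sketch of this training setup, assuming the `CrossEncoderTrainer` API from the development version of Sentence Transformers listed under Framework Versions; the split name is illustrative:
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import CrossEncoder
+ from sentence_transformers.cross_encoder import CrossEncoderTrainer
+ from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss
+ 
+ # (query, passage, score) rows, as in the samples above
+ train_dataset = load_dataset("tomaarsen/ms-marco-shuffled", split="train")
+ 
+ model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
+ # BCE over the single logit; Identity activation and no pos_weight, as configured above
+ loss = BinaryCrossEntropyLoss(model)
+ 
+ trainer = CrossEncoderTrainer(model=model, train_dataset=train_dataset, loss=loss)
+ trainer.train()
+ ```
+ 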
+ ### Evaluation Dataset
+ 
+ #### ms-marco-shuffled
+ 
+ * Dataset: [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled) at [88847c6](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled/tree/88847c65252168a8c2504664289ef21a9df0ca74)
+ * Size: 397,226,027 evaluation samples
+ * Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | query | passage | score |
+   |:--------|:------|:--------|:------|
+   | type    | string | string | float |
+   | details | <ul><li>min: 11 characters</li><li>mean: 33.94 characters</li><li>max: 164 characters</li></ul> | <ul><li>min: 58 characters</li><li>mean: 346.39 characters</li><li>max: 917 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | query | passage | score |
+   |:------|:--------|:------|
+   | <code>what is a jewel yam</code> | <code>Wild Yam can be very beneficial for nervousness, restlessness and other nervous conditions. As a stimulant for increased bile flow, it can help to relieve hepatic congestion, bilious colic and gallstones.</code> | <code>0.0</code> |
+   | <code>hours of daytona</code> | <code>24 Hours of Daytona. The 24 Hours of Daytona, currently known as the Rolex 24 At Daytona for sponsorship reasons, is a 24-hour sports car endurance race held annually at Daytona International Speedway in Daytona Beach, Florida. It is run on a 3.56-mile (5.73 km) combined road course, utilizing portions of the NASCAR tri-oval and an infield road course.</code> | <code>1.0</code> |
+   | <code>how much do autozone workers get paid</code> | <code>The typical AutoZone Sales Associate salary is $9. Sales Associate salaries at AutoZone can range from $7-$12. This estimate is based upon 59 AutoZone Sales Associate salary report(s) provided by employees or estimated based upon statistical methods. See all Sales Associate salaries to learn how this stacks up in the market.</code> | <code>1.0</code> |
+ * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
+   ```json
+   {
+       "activation_fct": "torch.nn.modules.linear.Identity",
+       "pos_weight": null
+   }
+   ```
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `seed`: 12
+ - `bf16`: True
+ - `dataloader_num_workers`: 4
+ - `load_best_model_at_end`: True
+ 
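+ These map one-to-one onto the trainer arguments; a sketch, assuming a `CrossEncoderTrainingArguments` class that mirrors `transformers.TrainingArguments` (the `output_dir` is illustrative):
+ ```python
+ from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments
+ 
+ args = CrossEncoderTrainingArguments(
+     output_dir="reranker-MiniLM-L12-H384-uncased-msmarco-bce",  # illustrative
+     eval_strategy="steps",
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     learning_rate=2e-5,
+     num_train_epochs=1,
+     warmup_ratio=0.1,
+     seed=12,
+     bf16=True,
+     dataloader_num_workers=4,
+     load_best_model_at_end=True,
+ )
+ ```
+ 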
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 12
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 4
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+ 
+ </details>
+ 
+ ### Training Logs
+ | Epoch      | Step      | Training Loss | Validation Loss | NanoMSMARCO_ndcg@10  | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10       | NanoBEIR_mean_ndcg@10 |
+ |:----------:|:---------:|:-------------:|:---------------:|:--------------------:|:--------------------:|:--------------------:|:---------------------:|
+ | -1         | -1        | -             | -               | 0.0324 (-0.5080)     | 0.2439 (-0.0811)     | 0.0361 (-0.4646)     | 0.1041 (-0.3512)      |
+ | 0.0000     | 1         | 0.6941        | -               | -                    | -                    | -                    | -                     |
+ | 0.0322     | 1000      | 0.5117        | -               | -                    | -                    | -                    | -                     |
+ | 0.0643     | 2000      | 0.2604        | -               | -                    | -                    | -                    | -                     |
+ | 0.0965     | 3000      | 0.2258        | -               | -                    | -                    | -                    | -                     |
+ | 0.1286     | 4000      | 0.2115        | -               | -                    | -                    | -                    | -                     |
+ | 0.1608     | 5000      | 0.1995        | 0.1879          | 0.6145 (+0.0741)     | 0.4002 (+0.0751)     | 0.6970 (+0.1964)     | 0.5706 (+0.1152)      |
+ | 0.1930     | 6000      | 0.1924        | -               | -                    | -                    | -                    | -                     |
+ | 0.2251     | 7000      | 0.1914        | -               | -                    | -                    | -                    | -                     |
+ | 0.2573     | 8000      | 0.1859        | -               | -                    | -                    | -                    | -                     |
+ | 0.2894     | 9000      | 0.1802        | -               | -                    | -                    | -                    | -                     |
+ | 0.3216     | 10000     | 0.1791        | 0.1628          | 0.6311 (+0.0906)     | 0.3795 (+0.0545)     | 0.7347 (+0.2341)     | 0.5818 (+0.1264)      |
+ | 0.3538     | 11000     | 0.1732        | -               | -                    | -                    | -                    | -                     |
+ | 0.3859     | 12000     | 0.1713        | -               | -                    | -                    | -                    | -                     |
+ | 0.4181     | 13000     | 0.1756        | -               | -                    | -                    | -                    | -                     |
+ | 0.4502     | 14000     | 0.1643        | -               | -                    | -                    | -                    | -                     |
+ | 0.4824     | 15000     | 0.166         | 0.1531          | 0.6540 (+0.1136)     | 0.3830 (+0.0579)     | 0.7315 (+0.2309)     | 0.5895 (+0.1341)      |
+ | 0.5146     | 16000     | 0.161         | -               | -                    | -                    | -                    | -                     |
+ | 0.5467     | 17000     | 0.1617        | -               | -                    | -                    | -                    | -                     |
+ | 0.5789     | 18000     | 0.1612        | -               | -                    | -                    | -                    | -                     |
+ | 0.6111     | 19000     | 0.1591        | -               | -                    | -                    | -                    | -                     |
+ | **0.6432** | **20000** | **0.1599**    | **0.1428**      | **0.6648 (+0.1244)** | **0.3769 (+0.0519)** | **0.7462 (+0.2455)** | **0.5960 (+0.1406)**  |
+ | 0.6754     | 21000     | 0.1599        | -               | -                    | -                    | -                    | -                     |
+ | 0.7075     | 22000     | 0.1523        | -               | -                    | -                    | -                    | -                     |
+ | 0.7397     | 23000     | 0.1525        | -               | -                    | -                    | -                    | -                     |
+ | 0.7719     | 24000     | 0.1549        | -               | -                    | -                    | -                    | -                     |
+ | 0.8040     | 25000     | 0.1515        | 0.1386          | 0.6682 (+0.1278)     | 0.3686 (+0.0436)     | 0.7481 (+0.2474)     | 0.5950 (+0.1396)      |
+ | 0.8362     | 26000     | 0.1556        | -               | -                    | -                    | -                    | -                     |
+ | 0.8683     | 27000     | 0.1501        | -               | -                    | -                    | -                    | -                     |
+ | 0.9005     | 28000     | 0.1522        | -               | -                    | -                    | -                    | -                     |
+ | 0.9327     | 29000     | 0.1493        | -               | -                    | -                    | -                    | -                     |
+ | 0.9648     | 30000     | 0.1509        | 0.1354          | 0.6805 (+0.1400)     | 0.3593 (+0.0343)     | 0.7439 (+0.2433)     | 0.5946 (+0.1392)      |
+ | 0.9970     | 31000     | 0.1481        | -               | -                    | -                    | -                    | -                     |
+ | -1         | -1        | -             | -               | 0.6648 (+0.1244)     | 0.3769 (+0.0519)     | 0.7462 (+0.2455)     | 0.5960 (+0.1406)      |
+ 
+ * The bold row denotes the saved checkpoint.
+ 
+ ### Framework Versions
+ - Python: 3.11.10
+ - Sentence Transformers: 3.5.0.dev0
+ - Transformers: 4.49.0.dev0
+ - PyTorch: 2.6.0.dev20241112+cu121
+ - Accelerate: 1.2.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.49.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4a894e1cd3f075c13c4ef2f391df3c1440aedf13d4338838f8965e199f64fd7
+ size 133464836
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff