Studeni committed on
Commit 34fafdc · verified · 1 Parent(s): 21ec29e

Add new CrossEncoder model

README.md ADDED
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- text-classification
- generated_from_trainer
- dataset_size:82326
- loss:LambdaLoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results: []
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
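The scores from `model.predict` can drive a simple rerank step. The sketch below mirrors the shape of the `CrossEncoder.rank` output, but uses a stand-in word-overlap scorer (an assumption for this sketch, not part of this model) so it runs without downloading any weights:

```python
# Minimal rerank helper: pair each candidate with the query, score the
# pairs, and sort descending -- the same shape CrossEncoder.rank returns.
def rerank(query, candidates, score_fn):
    pairs = [(query, doc) for doc in candidates]
    scores = score_fn(pairs)
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [{"corpus_id": i, "score": scores[i]} for i in order]

# Stand-in scorer (hypothetical, for illustration only): counts shared
# whitespace-separated tokens between query and document.
def toy_scorer(pairs):
    return [len(set(q.lower().split()) & set(d.lower().split())) for q, d in pairs]

ranked = rerank(
    "How many calories in an egg",
    [
        "Egg whites are very low in calories.",
        "There are on average between 55 and 80 calories in an egg.",
    ],
    toy_scorer,
)
print([r["corpus_id"] for r in ranked])
```

With the real model you would pass `model.predict` as `score_fn` instead of the toy scorer.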

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>CERerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CERerankingEvaluator)

| Metric      | NanoMSMARCO          | NanoNFCorpus         | NanoNQ               |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5185 (+0.0289)     | 0.3307 (+0.0603)     | 0.5630 (+0.1423)     |
| mrr@10      | 0.5102 (+0.0327)     | 0.5466 (+0.0468)     | 0.5730 (+0.1464)     |
| **ndcg@10** | **0.5876 (+0.0472)** | **0.3699 (+0.0449)** | **0.6260 (+0.1253)** |

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>CENanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CENanoBEIREvaluator)

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4707 (+0.0772)     |
| mrr@10      | 0.5433 (+0.0753)     |
| **ndcg@10** | **0.5278 (+0.0725)** |

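For reference, the metrics reported above follow the standard definitions. The snippet below is a minimal, illustrative implementation of mrr@k and ndcg@k over a ranked list of relevance labels; it is not the evaluator code that produced the tables:

```python
import math

# Standard ranking-metric definitions (illustrative, not the evaluator code).
# `relevances` is the graded relevance of each document, in ranked order.
def dcg_at_k(relevances, k):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevances, k=10):
    for i, rel in enumerate(relevances[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

ranking = [0, 1, 0, 0]  # the single relevant document sits at rank 2
print(round(mrr_at_k(ranking), 4))   # 0.5
print(round(ndcg_at_k(ranking), 4))  # 0.6309
```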
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 82,326 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                           | docs                                | labels                              |
  |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
  | type    | string                                                                                          | list                                | list                                |
  | details | <ul><li>min: 9 characters</li><li>mean: 34.34 characters</li><li>max: 91 characters</li></ul>  | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |
* Samples:
  | query                                                | docs                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | labels                             |
  |:-----------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------|
  | <code>what does tolterodine do</code>                | <code>['Tolterodine (Detrol, Detrusitol) is an antimuscarinic drug that is used for symptomatic treatment of urinary incontinence. It is marketed by Pfizer in Canada and the United States by its brand name Detrol. In Egypt it is also found under the trade names Tolterodine by Sabaa and Incont L.A. by Adwia. Detrusor overactivity (DO, contraction of the muscular bladder wall) is the most common form of UI in older adults. It is characterized by uninhibited bladder contractions causing an uncontrollable urge to void. Urinary frequency, urge incontinence and nocturnal incontinence occur.', 'Tolterodine reduces spasms of the bladder muscles. Tolterodine is used to treat overactive bladder with symptoms of urinary frequency, urgency, and incontinence. Tolterodine may also be used for purposes not listed in this medication guide. You should not take this medication if you are allergic to tolterodine or fesoterodine (Toviaz), if you have untreated or uncontrolled narrow-angle glaucoma, or if you ha...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>why no dairy when taking ciprofloxacin</code>  | <code>['Do not take ciprofloxacin with dairy products such as milk or yogurt, or with calcium-fortified juice. You may eat or drink these products as part of a regular meal, but do not use them alone when taking ciprofloxacin. They could make the medication less effective.', 'If your healthcare provider prescribes this medication, it is important to understand some precautions for using this drug. For instance, you should not take ciprofloxacin with dairy products alone (such as milk or yogurt) or with calcium-fortified juices (such as orange juice).', 'Do not take this medicine alone with milk, yogurt, or other dairy products. Do not drink any juice with calcium added when you take this medicine. It is okay to have dairy products or juice as part of a larger meal', 'Do not take ciprofloxacin with dairy products or calcium-fortified juice alone; you can, however, take ciprofloxacin with a meal that includes these...', 'You should not use ciprofloxacin if: 1 you are also taking tizanidine (Z...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>standard depth of countertops overhang</code>  | <code>['Overhang. Countertops extend out from the face frame of the cabinets and just over the cabinet doors. This is called the overhang. Standard cabinet frames are 24 inches deep with 3/4 inch to 1 inch thick doors. Most countertops have a 1 inch overhang to make a standard depth of 25 inches. While there are many different materials to use for countertops, most come in a standard thickness of 1 1/2 inches.', 'Hanging Out on an Island. The standard overhang of an island countertop -- on the side designed to sit at and tuck stools underneath -- is 12 inches. If you plan to extend the counter farther, you need to add supports such as legs, or wood corbels or metal L-brackets that extend half the overhang’s distance.', 'The standard vanity counter top depth. Usually countertops overhang the doors by about one half of an inch. So, if your finished box size, including the door is twenty one and three quarters inches deep, then your finished top will be 22 1/4” in depth. The cut size should be ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>LambdaLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#lambdaloss) with these parameters:
  ```json
  {
      "weighing_scheme": "LambdaRankScheme",
      "k": 10,
      "sigma": 1.0,
      "eps": 1e-10,
      "pad_value": -1,
      "reduction": "mean",
      "reduction_log": "binary",
      "activation_fct": null
  }
  ```
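As a rough intuition for the LambdaRank-style weighting named above (this is a toy sketch, not the library's implementation): each document pair is weighted by how much swapping the two documents would change NDCG, so training concentrates on the swaps that matter most for the ranking metric.

```python
import math

# Toy sketch of a LambdaRank-style pair weight: the absolute NDCG change
# caused by swapping the documents at positions i and j (not library code).
def dcg(relevances):
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances))

def swap_delta_ndcg(relevances, i, j):
    ideal = dcg(sorted(relevances, reverse=True))
    swapped = list(relevances)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return abs(dcg(swapped) - dcg(relevances)) / ideal

ranking = [0, 1, 0, 0]
# Moving the relevant document from rank 2 up to rank 1 changes NDCG a lot...
print(round(swap_delta_ndcg(ranking, 0, 1), 4))  # 0.3691
# ...while swapping two irrelevant documents changes nothing.
print(round(swap_delta_ndcg(ranking, 2, 3), 4))  # 0.0
```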

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 82,326 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                            | docs                                | labels                              |
  |:--------|:-------------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
  | type    | string                                                                                           | list                                | list                                |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.63 characters</li><li>max: 99 characters</li></ul>  | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |
* Samples:
  | query                                                 | docs                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | labels                             |
  |:------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------|
  | <code>define monogenic trait</code>                   | <code>['An allele is a version of a gene. For example, in fruitflies there is a gene which determines eye colour: one allele gives red eyes, and another gives white eyes; it is the same *gene*, just different versions of that gene. A monogenic trait is one which is encoded by a single gene. e.g. - cystic fibrosis in humans. There is a single gene which determines this trait: the wild-type allele is healthy, while the disease allele gives you cystic fibrosis', 'Abstract. Monogenic inheritance refers to genetic control of a phenotype or trait by a single gene. For a monogenic trait, mutations in one (dominant) or both (recessive) copies of the gene are sufficient for the trait to be expressed. Digenic inheritance refers to mutation on two genes interacting to cause a genetic phenotype or disease. Triallelic inheritance is a special case of digenic inheritance that requires homozygous mutations at one locus and heterozygous mutations at a second locus to express a phenotype.', 'A trait that is ...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
  | <code>behavioral theory definition</code>             | <code>["Not to be confused with Behavioralism. Behaviorism (or behaviourism) is an approach to psychology that focuses on an individual's behavior. It combines elements of philosophy, methodology, and psychological theory", 'The initial assumption is that behavior can be explained and further described using behavioral theories. For instance, John Watson and B.F. Skinner advocate the theory that behavior can be acquired through conditioning. Also known as general behavior theory. BEHAVIOR THEORY: Each behavioral theory is an advantage to learning, because it provides teachers with a new and different approach.. No related posts. ', 'behaviorism. noun be·hav·ior·ism. : a school of psychology that takes the objective evidence of behavior (as measured responses to stimuli) as the only concern of its research and the only basis of its theory without reference to conscious experience—compare cognitive psychology. : a school of psychology that takes the objective evidence of behavior (as measured ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>What is a disease that is pleiotropic?</code>   | <code>['Unsourced material may be challenged and removed. (September 2013). Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Consequently, a mutation in a pleiotropic gene may have an effect on some or all traits simultaneously. The underlying mechanism is that the gene codes for a product that is, for example, used by various cells, or has a signaling function on various targets. A classic example of pleiotropy is the human disease phenylketonuria (PKU).', 'Pleiotropic, autosomal dominant disorder affecting connective tissue: Related Diseases. Pleiotropic, autosomal dominant disorder affecting connective tissue: Pleiotropic, autosomal dominant disorder affecting connective tissue is listed as a type of (or associated with) the following medical conditions in our database: 1 Heart conditions. Office of Rare Diseases (ORD) of ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>LambdaLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#lambdaloss) with these parameters:
  ```json
  {
      "weighing_scheme": "LambdaRankScheme",
      "k": 10,
      "sigma": 1.0,
      "eps": 1e-10,
      "pad_value": -1,
      "reduction": "mean",
      "reduction_log": "binary",
      "activation_fct": null
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `torch_empty_cache_steps`: 2000
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

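For reference, `lr_scheduler_type: linear` combined with `warmup_ratio: 0.1` means the learning rate ramps linearly from 0 up to 2e-05 over the first 10% of training steps, then decays linearly back to 0 (standard Transformers scheduler behaviour; the step count below is illustrative, not taken from this run):

```python
# Sketch of the linear warmup + linear decay schedule implied by the
# hyperparameters above. Step counts are illustrative.
def lr_at_step(step, total_steps, base_lr=2e-05, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr back down to 0 for the rest of training.
    remaining = total_steps - warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, remaining))

total = 1000  # illustrative
print(lr_at_step(0, total))     # 0.0 at the start
print(lr_at_step(100, total))   # peak of 2e-05 at the end of warmup
print(lr_at_step(1000, total))  # 0.0 at the end
```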
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: 2000
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step      | Training Loss | Validation Loss | NanoMSMARCO_ndcg@10  | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10       | NanoBEIR_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------:|:--------------------:|:--------------------:|:---------------------:|
| -1         | -1        | -             | -               | 0.1127 (-0.4278)     | 0.2057 (-0.1193)     | 0.0150 (-0.4857)     | 0.1111 (-0.3443)      |
| 0.0001     | 1         | 0.0767        | -               | -                    | -                    | -                    | -                     |
| 0.0430     | 500       | 0.0864        | -               | -                    | -                    | -                    | -                     |
| 0.0861     | 1000      | 0.0931        | -               | -                    | -                    | -                    | -                     |
| 0.1291     | 1500      | 0.0896        | -               | -                    | -                    | -                    | -                     |
| 0.1721     | 2000      | 0.0832        | 0.0786          | 0.4801 (-0.0603)     | 0.3282 (+0.0031)     | 0.5660 (+0.0654)     | 0.4581 (+0.0028)      |
| 0.2152     | 2500      | 0.0803        | -               | -                    | -                    | -                    | -                     |
| 0.2582     | 3000      | 0.0776        | -               | -                    | -                    | -                    | -                     |
| 0.3013     | 3500      | 0.0775        | -               | -                    | -                    | -                    | -                     |
| 0.3443     | 4000      | 0.0761        | 0.0729          | 0.5320 (-0.0084)     | 0.3207 (-0.0043)     | 0.6709 (+0.1702)     | 0.5079 (+0.0525)      |
| 0.3873     | 4500      | 0.0769        | -               | -                    | -                    | -                    | -                     |
| 0.4304     | 5000      | 0.0736        | -               | -                    | -                    | -                    | -                     |
| 0.4734     | 5500      | 0.0733        | -               | -                    | -                    | -                    | -                     |
| 0.5164     | 6000      | 0.0728        | 0.0717          | 0.5413 (+0.0009)     | 0.3416 (+0.0165)     | 0.6304 (+0.1297)     | 0.5044 (+0.0491)      |
| 0.5595     | 6500      | 0.0742        | -               | -                    | -                    | -                    | -                     |
| 0.6025     | 7000      | 0.0716        | -               | -                    | -                    | -                    | -                     |
| 0.6456     | 7500      | 0.0729        | -               | -                    | -                    | -                    | -                     |
| 0.6886     | 8000      | 0.0717        | 0.0726          | 0.5766 (+0.0362)     | 0.3229 (-0.0021)     | 0.5439 (+0.0433)     | 0.4811 (+0.0258)      |
| 0.7316     | 8500      | 0.0724        | -               | -                    | -                    | -                    | -                     |
| 0.7747     | 9000      | 0.0723        | -               | -                    | -                    | -                    | -                     |
| 0.8177     | 9500      | 0.0696        | -               | -                    | -                    | -                    | -                     |
| 0.8607     | 10000     | 0.0703        | 0.0688          | 0.5840 (+0.0436)     | 0.3482 (+0.0231)     | 0.6047 (+0.1040)     | 0.5123 (+0.0569)      |
| 0.9038     | 10500     | 0.0718        | -               | -                    | -                    | -                    | -                     |
| 0.9468     | 11000     | 0.0709        | -               | -                    | -                    | -                    | -                     |
| 0.9898     | 11500     | 0.0704        | -               | -                    | -                    | -                    | -                     |
| 1.0329     | 12000     | 0.0666        | 0.0694          | 0.5643 (+0.0238)     | 0.3048 (-0.0202)     | 0.5767 (+0.0761)     | 0.4819 (+0.0266)      |
| 1.0759     | 12500     | 0.0665        | -               | -                    | -                    | -                    | -                     |
| 1.1190     | 13000     | 0.0658        | -               | -                    | -                    | -                    | -                     |
| 1.1620     | 13500     | 0.0655        | -               | -                    | -                    | -                    | -                     |
| 1.2050     | 14000     | 0.0657        | 0.0698          | 0.5976 (+0.0572)     | 0.3538 (+0.0287)     | 0.6231 (+0.1224)     | 0.5248 (+0.0695)      |
| 1.2481     | 14500     | 0.0644        | -               | -                    | -                    | -                    | -                     |
| 1.2911     | 15000     | 0.065         | -               | -                    | -                    | -                    | -                     |
| 1.3341     | 15500     | 0.066         | -               | -                    | -                    | -                    | -                     |
| 1.3772     | 16000     | 0.0649        | 0.0680          | 0.5993 (+0.0589)     | 0.3362 (+0.0112)     | 0.6127 (+0.1120)     | 0.5161 (+0.0607)      |
| 1.4202     | 16500     | 0.0655        | -               | -                    | -                    | -                    | -                     |
| 1.4632     | 17000     | 0.0638        | -               | -                    | -                    | -                    | -                     |
| 1.5063     | 17500     | 0.0676        | -               | -                    | -                    | -                    | -                     |
| 1.5493     | 18000     | 0.0645        | 0.0672          | 0.5703 (+0.0299)     | 0.3530 (+0.0280)     | 0.5643 (+0.0637)     | 0.4959 (+0.0405)      |
| 1.5924     | 18500     | 0.0646        | -               | -                    | -                    | -                    | -                     |
| 1.6354     | 19000     | 0.0636        | -               | -                    | -                    | -                    | -                     |
| 1.6784     | 19500     | 0.0671        | -               | -                    | -                    | -                    | -                     |
| 1.7215     | 20000     | 0.0646        | 0.0678          | 0.6072 (+0.0667)     | 0.3586 (+0.0335)     | 0.5840 (+0.0834)     | 0.5166 (+0.0612)      |
| 1.7645     | 20500     | 0.0656        | -               | -                    | -                    | -                    | -                     |
| 1.8075     | 21000     | 0.0623        | -               | -                    | -                    | -                    | -                     |
| 1.8506     | 21500     | 0.0649        | -               | -                    | -                    | -                    | -                     |
| 1.8936     | 22000     | 0.0636        | 0.0672          | 0.5940 (+0.0536)     | 0.3503 (+0.0252)     | 0.5898 (+0.0891)     | 0.5114 (+0.0560)      |
| 1.9367     | 22500     | 0.0632        | -               | -                    | -                    | -                    | -                     |
| 1.9797     | 23000     | 0.0646        | -               | -                    | -                    | -                    | -                     |
| 2.0227     | 23500     | 0.0614        | -               | -                    | -                    | -                    | -                     |
| 2.0658     | 24000     | 0.0572        | 0.0692          | 0.5824 (+0.0420)     | 0.3678 (+0.0428)     | 0.5803 (+0.0796)     | 0.5102 (+0.0548)      |
| 2.1088     | 24500     | 0.0568        | -               | -                    | -                    | -                    | -                     |
| 2.1518     | 25000     | 0.0577        | -               | -                    | -                    | -                    | -                     |
| 2.1949     | 25500     | 0.0575        | -               | -                    | -                    | -                    | -                     |
| 2.2379     | 26000     | 0.0579        | 0.0704          | 0.5830 (+0.0425)     | 0.3662 (+0.0411)     | 0.5855 (+0.0849)     | 0.5116 (+0.0562)      |
| 2.2809     | 26500     | 0.0583        | -               | -                    | -                    | -                    | -                     |
| 2.3240     | 27000     | 0.0572        | -               | -                    | -                    | -                    | -                     |
| 2.3670     | 27500     | 0.058         | -               | -                    | -                    | -                    | -                     |
| **2.4101** | **28000** | **0.0581**    | **0.069**       | **0.5876 (+0.0472)** | **0.3699 (+0.0449)** | **0.6260 (+0.1253)** | **0.5278 (+0.0725)**  |
| 2.4531     | 28500     | 0.0563        | -               | -                    | -                    | -                    | -                     |
| 2.4961     | 29000     | 0.0564        | -               | -                    | -                    | -                    | -                     |
| 2.5392     | 29500     | 0.057         | -               | -                    | -                    | -                    | -                     |
| 2.5822     | 30000     | 0.0568        | 0.0696          | 0.5862 (+0.0458)     | 0.3753 (+0.0502)     | 0.5947 (+0.0940)     | 0.5187 (+0.0634)      |
| 2.6252     | 30500     | 0.0574        | -               | -                    | -                    | -                    | -                     |
| 2.6683     | 31000     | 0.0579        | -               | -                    | -                    | -                    | -                     |
| 2.7113     | 31500     | 0.0577        | -               | -                    | -                    | -                    | -                     |
| 2.7543     | 32000     | 0.056         | 0.0700          | 0.5598 (+0.0194)     | 0.3712 (+0.0462)     | 0.5826 (+0.0819)     | 0.5045 (+0.0492)      |
| 2.7974     | 32500     | 0.0579        | -               | -                    | -                    | -                    | -                     |
| 2.8404     | 33000     | 0.0575        | -               | -                    | -                    | -                    | -                     |
| 2.8835     | 33500     | 0.0567        | -               | -                    | -                    | -                    | -                     |
| 2.9265     | 34000     | 0.0548        | 0.0700          | 0.5856 (+0.0452)     | 0.3734 (+0.0484)     | 0.5875 (+0.0869)     | 0.5155 (+0.0601)      |
| 2.9695     | 34500     | 0.059         | -               | -                    | -                    | -                    | -                     |
| -1         | -1        | -             | -               | 0.5876 (+0.0472)     | 0.3699 (+0.0449)     | 0.6260 (+0.1253)     | 0.5278 (+0.0725)      |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### LambdaLoss
```bibtex
@inproceedings{wang2018lambdaloss,
    title={The LambdaLoss Framework for Ranking Metric Optimization},
    author={Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc},
    booktitle={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
    pages={1313--1322},
    year={2018}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d511c1db4da82b17b77f0a082e56adb6658cd99492f79d8b55079d1e78021b36
size 133464836
special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff