---

language:
- en
tags:
- sentence-transformers
- cross-encoder
- text-classification
- generated_from_trainer
- dataset_size:78704
- loss:ListNetLoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 201.83156300124415
  energy_consumed: 0.519244982020273
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 1.659
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results: []
---


# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co./microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co./datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co./microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co./datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co./models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-sigmoid-scale-10")

# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator)

| Metric      | NanoMSMARCO          | NanoNFCorpus         | NanoNQ               |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5122 (+0.0226)     | 0.3306 (+0.0696)     | 0.5716 (+0.1520)     |
| mrr@10      | 0.5044 (+0.0269)     | 0.5401 (+0.0403)     | 0.5754 (+0.1487)     |
| **ndcg@10** | **0.5840 (+0.0435)** | **0.3676 (+0.0425)** | **0.6431 (+0.1425)** |

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator)

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4715 (+0.0814)     |
| mrr@10      | 0.5400 (+0.0720)     |
| **ndcg@10** | **0.5316 (+0.0762)** |
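The reported numbers come from the evaluators linked above; purely for illustration, ndcg@10 rewards placing relevant documents near the top of the reranked list with a logarithmic position discount. A minimal sketch for binary relevance (a simplified stand-in, not the evaluator's implementation):

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """Normalized DCG: DCG of the ranking divided by the ideal DCG."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Binary relevance for 10 reranked documents: the single relevant
# document was placed at rank 2, so ndcg@10 is 1/log2(3) ≈ 0.6309.
ranking = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(round(ndcg_at_k(ranking), 4))  # 0.6309
```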

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co./datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co./datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                | labels                              |
  |:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
  | type    | string                                                                                         | list                                | list                                |
  | details | <ul><li>min: 9 characters</li><li>mean: 33.95 characters</li><li>max: 103 characters</li></ul> | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |

* Samples:
  | query                                                       | docs                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | labels                            |
  |:------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
  | <code>average temperature in may for denver colorado</code> | <code>["In most years, Denver averages a daily maximum temperature for May that's between 67 and 74 degrees Fahrenheit (19 to 23 degrees Celsius). The minimum temperature usually falls between 42 and 46 °F (5 to 8 °C). The days at Denver warm quickly during May.", 'The highest average temperature in Denver is July at 74 degrees. The coldest average temperature in Denver is December at 28.5 degrees. The most monthly precipitation in Denver occurs in August with 2.7 inches. The Denver weather information is based on the average of the previous 3-7 years of data.', "Climate for Denver, Colorado. Denver's coldest month is January when the average temperature overnight is 15.2°F. In July, the warmest month, the average day time temperature rises to 88.0°F.", "Average Temperatures for Denver. Denver's coldest month is January when the average temperature overnight is 15.2°F. In July, the warmest month, the average day time temperature rises to 88.0°F.", 'Location. This report describes the typical...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what is brain surgery</code>                          | <code>['The term “brain surgery” refers to various medical procedures that involve repairing structural problems with the brain. There are numerous types of brain surgery. The type used is based on the area of the brain and condition being treated. Advances in medical technology let surgeons operate on portions of the brain without a single incision near the head. Brain surgery is a critical and complicated process. The type of brain surgery done depends highly on the condition being treated. For example, a brain aneurysm is typically repaired using an endoscope, but if it has ruptured, a craniotomy may be used.', 'Brain surgery is an operation to treat problems in the brain and surrounding structures. Before surgery, the hair on part of the scalp is shaved and the area is cleaned. The doctor makes a surgical cut through the scalp. The location of this cut depends on where the problem in the brain is located. The surgeon creates a hole in the skull and removes a bone flap.', 'Brain Surgery –...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>whos the girl in terminator genisys</code>            | <code>['Over the weekend, Terminator Genisys grossed $28.7 million to take the third spot at the box office, behind Jurassic World and Inside Out. FYI: Emilia is wearing Dior. 10+ pictures inside of Emilia Clarke and Arnold Schwarzenegger hitting the Terminator Genisys premiere in Japan…. Emilia Clarke is red hot while attending the premiere of her new film Terminator Genisys held at the Roppongi Hills Arena on Monday (July 6) in Tokyo, Japan.', "Jai Courtney, who plays Sarah's protector Kyle Reese (and eventual father to Jason Clarke 's John Connor), revealed that this role was the first time a character he played has fallen in love on screen. I had never fallen in love on screen before.", 'When John Connor (Jason Clarke), leader of the human resistance, sends Sgt. Kyle Reese (Jai Courtney) back to 1984 to protect Sarah Connor (Emilia Clarke) and safeguard the future, an unexpected turn of events creates a fractured timeline.', "On the run from the Terminator, Reese and Sarah share a night ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>ListNetLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#listnetloss) with these parameters:
  ```json
  {
      "pad_value": -1,
      "activation_fct": "torch.nn.modules.activation.Sigmoid"
  }
  ```
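ListNetLoss treats each (query, docs) group as a list: it minimizes the cross-entropy between the top-1 probability distribution induced by the labels and the one induced by the model's activated scores. The following is a minimal pure-Python sketch of that objective under simplifying assumptions (sigmoid activation as configured above, no padding, no scale factor), not the library's exact implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def listnet_loss(scores, labels):
    """Cross-entropy between the top-1 distributions induced by the
    labels and by the sigmoid-activated model scores."""
    p_true = softmax(labels)
    p_pred = softmax([sigmoid(s) for s in scores])
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))

# A ranking that agrees with the labels yields a lower loss than one
# that puts the relevant document in the wrong position.
good = listnet_loss([5.0, -3.0, -3.0], [1.0, 0.0, 0.0])
bad = listnet_loss([-3.0, 5.0, -3.0], [1.0, 0.0, 0.0])
print(good < bad)  # True
```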


### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co./datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co./datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                | labels                              |
  |:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
  | type    | string                                                                                         | list                                | list                                |
  | details | <ul><li>min: 10 characters</li><li>mean: 33.53 characters</li><li>max: 95 characters</li></ul> | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |

* Samples:
  | query                                      | docs                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | labels                            |
  |:-------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
  | <code>lpn salary richmond va</code>        | <code>['$52,000. Average LPN salaries for job postings in Richmond, VA are 1% higher than average LPN salaries for job postings nationwide.', 'A Licensed Practical Nurse (LPN) in Richmond, Virginia earns an average wage of $18.47 per hour. For the first five to ten years in this position, pay increases somewhat, but any additional experience does not have a big effect on pay. $27,369 - $48,339. (Median).', 'Virginia has a growing number of opportunities in the nursing field. Within the state, LPNs make up 25 % of nurses in the state. The Virginia LPN comfort score is 54. This takes into account the average LPN salary, average state salary and cost of living.', 'LPN Salaries and Career Outlook in Richmond. Many LPN graduates choose to work as licensed practical nurses after graduation. If you choose to follow that path and remain in Richmond, your job prospects are good. In 2010, of the 20,060 licensed practical nurses in Virginia, 370 were working in the greater Richmond area.', 'This chart ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what is neutrogena</code>            | <code>["Neutrogena is an American brand of skin care, hair care and cosmetics, that is headquartered in Los Angeles, California. According to product advertising at their website, Neutrogena products are distributed in more than 70 countries. Neutrogena was founded in 1930 by Emanuel Stolaroff, and was originally a cosmetics company named Natone. In 1994 Johnson & Johnson acquired Neutrogena for $924 million, at a price of $35.25 per share. Johnson & Johnson's international network helped Neutrogena boost its sales and enter newer markets including India, South Africa, and China. Priced at a premium, Neutrogena products are distributed in over 70 countries.", 'Neutrogena also has retinol products for treating acne that have one thing going for them that most brands do not—they are in the kind of package that keeps the retinol cream fresh and active. Any kind of vitamin you dip out of jar will go bad almost as soon as you open the container due to oxidation.ost of the products Neutrogena make...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>why is lincoln a great leader</code> | <code>['His commitment to the rights of individuals was a cornerstone of his leadership style (Phillips, 1992). There have been many great leaders throughout the history of this great nation, but Abraham Lincoln is consistently mentioned as one of our greatest leaders. Although Lincoln possessed many characteristics of a great leader, probably his greatest leadership trait was his ability to communicate. Though Lincoln only had one year of formal education, he was able to master language and use his words to influence the people as a great public speaker, debater and as a humorist. Another part of Lincoln’s skills as a great communicator, was that he had a great capacity for learning to listen to different points of view. While president, he created a work environment where his cabinet members were able to disagree with his decisions without the threat of retaliation for doing so.', 'Expressed in his own words, here is Lincoln’s most luminous leadership insight by far: In order to win a man ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>ListNetLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#listnetloss) with these parameters:
  ```json
  {
      "pad_value": -1,
      "activation_fct": "torch.nn.modules.activation.Sigmoid"
  }
  ```


### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True
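
Assuming a recent sentence-transformers release that exposes `CrossEncoderTrainingArguments` (the import path and the `output_dir` value here are assumptions, not taken from this run), the non-default settings above would map onto a training-arguments instance roughly as follows:

```python
# Hypothetical sketch of the non-default hyperparameters as a config object.
from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="models/reranker-minilm-listnet",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=6,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    load_best_model_at_end=True,
)
```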

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>

### Training Logs
| Epoch      | Step      | Training Loss | Validation Loss | NanoMSMARCO_ndcg@10  | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10       | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------------:|
| -1         | -1        | -             | -               | 0.0355 (-0.5049)     | 0.2822 (-0.0429)     | 0.0479 (-0.4528)     | 0.1218 (-0.3335)           |
| 0.0001     | 1         | 1.8394        | -               | -                    | -                    | -                    | -                          |
| 0.0762     | 1000      | 2.0827        | -               | -                    | -                    | -                    | -                          |
| 0.1525     | 2000      | 2.0821        | -               | -                    | -                    | -                    | -                          |
| 0.2287     | 3000      | 2.078         | -               | -                    | -                    | -                    | -                          |
| 0.3049     | 4000      | 2.0776        | 2.0734          | 0.5755 (+0.0350)     | 0.3542 (+0.0292)     | 0.5827 (+0.0820)     | 0.5041 (+0.0487)           |
| 0.3812     | 5000      | 2.0735        | -               | -                    | -                    | -                    | -                          |
| 0.4574     | 6000      | 2.0719        | -               | -                    | -                    | -                    | -                          |
| 0.5336     | 7000      | 2.0703        | -               | -                    | -                    | -                    | -                          |
| 0.6098     | 8000      | 2.0712        | 2.0726          | 0.5748 (+0.0344)     | 0.3382 (+0.0131)     | 0.5966 (+0.0959)     | 0.5032 (+0.0478)           |
| 0.6861     | 9000      | 2.078         | -               | -                    | -                    | -                    | -                          |
| 0.7623     | 10000     | 2.0712        | -               | -                    | -                    | -                    | -                          |
| 0.8385     | 11000     | 2.0752        | -               | -                    | -                    | -                    | -                          |
| 0.9148     | 12000     | 2.0755        | 2.0716          | 0.5395 (-0.0009)     | 0.3428 (+0.0177)     | 0.5451 (+0.0444)     | 0.4758 (+0.0204)           |
| 0.9910     | 13000     | 2.0698        | -               | -                    | -                    | -                    | -                          |
| 1.0672     | 14000     | 2.072         | -               | -                    | -                    | -                    | -                          |
| 1.1435     | 15000     | 2.0704        | -               | -                    | -                    | -                    | -                          |
| 1.2197     | 16000     | 2.0693        | 2.0713          | 0.5538 (+0.0134)     | 0.3639 (+0.0388)     | 0.5766 (+0.0759)     | 0.4981 (+0.0427)           |
| 1.2959     | 17000     | 2.0716        | -               | -                    | -                    | -                    | -                          |
| 1.3722     | 18000     | 2.0628        | -               | -                    | -                    | -                    | -                          |
| 1.4484     | 19000     | 2.0691        | -               | -                    | -                    | -                    | -                          |
| **1.5246** | **20000** | **2.0659**    | **2.0733**      | **0.5840 (+0.0435)** | **0.3676 (+0.0425)** | **0.6431 (+0.1425)** | **0.5316 (+0.0762)**       |
| 1.6009     | 21000     | 2.0725        | -               | -                    | -                    | -                    | -                          |
| 1.6771     | 22000     | 2.0725        | -               | -                    | -                    | -                    | -                          |
| 1.7533     | 23000     | 2.0663        | -               | -                    | -                    | -                    | -                          |
| 1.8295     | 24000     | 2.0671        | 2.0715          | 0.5521 (+0.0117)     | 0.3339 (+0.0089)     | 0.6005 (+0.0999)     | 0.4955 (+0.0401)           |
| 1.9058     | 25000     | 2.0686        | -               | -                    | -                    | -                    | -                          |
| 1.9820     | 26000     | 2.0685        | -               | -                    | -                    | -                    | -                          |
| 2.0582     | 27000     | 2.068         | -               | -                    | -                    | -                    | -                          |
| 2.1345     | 28000     | 2.0622        | 2.0723          | 0.5721 (+0.0317)     | 0.3509 (+0.0258)     | 0.5870 (+0.0863)     | 0.5033 (+0.0480)           |
| 2.2107     | 29000     | 2.0664        | -               | -                    | -                    | -                    | -                          |
| 2.2869     | 30000     | 2.0616        | -               | -                    | -                    | -                    | -                          |
| 2.3632     | 31000     | 2.0661        | -               | -                    | -                    | -                    | -                          |
| 2.4394     | 32000     | 2.0638        | 2.0725          | 0.5620 (+0.0216)     | 0.3481 (+0.0230)     | 0.5899 (+0.0893)     | 0.5000 (+0.0447)           |
| 2.5156     | 33000     | 2.0643        | -               | -                    | -                    | -                    | -                          |
| 2.5919     | 34000     | 2.0611        | -               | -                    | -                    | -                    | -                          |
| 2.6681     | 35000     | 2.0609        | -               | -                    | -                    | -                    | -                          |
| 2.7443     | 36000     | 2.0658        | 2.0720          | 0.5846 (+0.0441)     | 0.3569 (+0.0318)     | 0.5759 (+0.0752)     | 0.5058 (+0.0504)           |
| 2.8206     | 37000     | 2.066         | -               | -                    | -                    | -                    | -                          |
| 2.8968     | 38000     | 2.0692        | -               | -                    | -                    | -                    | -                          |
| 2.9730     | 39000     | 2.0692        | -               | -                    | -                    | -                    | -                          |
| -1         | -1        | -             | -               | 0.5840 (+0.0435)     | 0.3676 (+0.0425)     | 0.6431 (+0.1425)     | 0.5316 (+0.0762)           |

* The bold row denotes the saved checkpoint.

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.519 kWh
- **Carbon Emitted**: 0.202 kg of CO2
- **Hours Used**: 1.659 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.48.3
- PyTorch: 2.5.0+cu121
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### ListNetLoss
```bibtex
@inproceedings{cao2007learning,
    title={Learning to rank: from pairwise approach to listwise approach},
    author={Cao, Zhe and Qin, Tao and Liu, Tie-Yan and Tsai, Ming-Feng and Li, Hang},
    booktitle={Proceedings of the 24th international conference on Machine learning},
    pages={129--136},
    year={2007}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->