CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Cross Encoder (reranker)
  • Base model: microsoft/MiniLM-L12-H384-uncased
  • Maximum Sequence Length: 512 tokens
  • Number of Output Labels: 1 label
  • Model Size: 33.4M parameters (F32)
  • Training Dataset: ms_marco
  • Language: en

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-sigmoid-scale-10")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
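
In a full search pipeline, this model is typically used in a retrieve-and-rerank setup: a fast bi-encoder retrieves candidates and the cross-encoder reorders them. Below is a minimal sketch of that pattern; the bi-encoder sentence-transformers/all-MiniLM-L6-v2 is an arbitrary illustrative choice, not part of this model.

from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Stage 1: an arbitrary bi-encoder for candidate retrieval (illustrative choice).
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Stage 2: this cross-encoder, used to rerank the retrieved candidates.
reranker = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-sigmoid-scale-10")

corpus = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
    "Most of the calories in an egg come from the yellow yolk in the center.",
]
query = "How many calories in an egg"

# Retrieve the top candidates with the bi-encoder.
corpus_embeddings = retriever.encode(corpus, convert_to_tensor=True)
query_embedding = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]

# Rerank the retrieved candidates with the cross-encoder.
pairs = [(query, corpus[hit["corpus_id"]]) for hit in hits]
scores = reranker.predict(pairs)
for score, hit in sorted(zip(scores, hits), key=lambda pair: pair[0], reverse=True):
    print(f"{score:.4f} {corpus[hit['corpus_id']]}")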

Evaluation

Metrics

Cross Encoder Reranking

Metric   NanoMSMARCO       NanoNFCorpus      NanoNQ
map      0.5122 (+0.0226)  0.3306 (+0.0696)  0.5716 (+0.1520)
mrr@10   0.5044 (+0.0269)  0.5401 (+0.0403)  0.5754 (+0.1487)
ndcg@10  0.5840 (+0.0435)  0.3676 (+0.0425)  0.6431 (+0.1425)

Cross Encoder Nano BEIR

Metric   Value
map      0.4715 (+0.0814)
mrr@10   0.5400 (+0.0720)
ndcg@10  0.5316 (+0.0762)
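
These metrics can be recomputed along the following lines; this is a sketch assuming the CrossEncoderNanoBEIREvaluator available in recent sentence-transformers releases, not the exact evaluation script used for this card.

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-sigmoid-scale-10")

# Rerank the three NanoBEIR subsets reported above and collect map, mrr@10, ndcg@10.
evaluator = CrossEncoderNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results)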

Training Details

Training Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 78,704 training samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:

    Column   Type    Details
    query    string  min: 9 characters, mean: 33.95 characters, max: 103 characters
    docs     list    size: 10 elements
    labels   list    size: 10 elements
  • Samples:
    query docs labels
    average temperature in may for denver colorado ["In most years, Denver averages a daily maximum temperature for May that's between 67 and 74 degrees Fahrenheit (19 to 23 degrees Celsius). The minimum temperature usually falls between 42 and 46 °F (5 to 8 °C). The days at Denver warm quickly during May.", 'The highest average temperature in Denver is July at 74 degrees. The coldest average temperature in Denver is December at 28.5 degrees. The most monthly precipitation in Denver occurs in August with 2.7 inches. The Denver weather information is based on the average of the previous 3-7 years of data.', "Climate for Denver, Colorado. Denver's coldest month is January when the average temperature overnight is 15.2°F. In July, the warmest month, the average day time temperature rises to 88.0°F.", "Average Temperatures for Denver. Denver's coldest month is January when the average temperature overnight is 15.2°F. In July, the warmest month, the average day time temperature rises to 88.0°F.", 'Location. This report describes the typical... [1, 0, 0, 0, 0, ...]
    what is brain surgery ['The term “brain surgery” refers to various medical procedures that involve repairing structural problems with the brain. There are numerous types of brain surgery. The type used is based on the area of the brain and condition being treated. Advances in medical technology let surgeons operate on portions of the brain without a single incision near the head. Brain surgery is a critical and complicated process. The type of brain surgery done depends highly on the condition being treated. For example, a brain aneurysm is typically repaired using an endoscope, but if it has ruptured, a craniotomy may be used.', 'Brain surgery is an operation to treat problems in the brain and surrounding structures. Before surgery, the hair on part of the scalp is shaved and the area is cleaned. The doctor makes a surgical cut through the scalp. The location of this cut depends on where the problem in the brain is located. The surgeon creates a hole in the skull and removes a bone flap.', 'Brain Surgery –... [1, 0, 0, 0, 0, ...]
    whos the girl in terminator genisys ['Over the weekend, Terminator Genisys grossed $28.7 million to take the third spot at the box office, behind Jurassic World and Inside Out. FYI: Emilia is wearing Dior. 10+ pictures inside of Emilia Clarke and Arnold Schwarzenegger hitting the Terminator Genisys premiere in Japan…. Emilia Clarke is red hot while attending the premiere of her new film Terminator Genisys held at the Roppongi Hills Arena on Monday (July 6) in Tokyo, Japan.', "Jai Courtney, who plays Sarah's protector Kyle Reese (and eventual father to Jason Clarke 's John Connor), revealed that this role was the first time a character he played has fallen in love on screen. I had never fallen in love on screen before.", 'When John Connor (Jason Clarke), leader of the human resistance, sends Sgt. Kyle Reese (Jai Courtney) back to 1984 to protect Sarah Connor (Emilia Clarke) and safeguard the future, an unexpected turn of events creates a fractured timeline.', "On the run from the Terminator, Reese and Sarah share a night ... [1, 0, 0, 0, 0, ...]
  • Loss: ListNetLoss with these parameters:
    {
        "pad_value": -1,
        "activation_fct": "torch.nn.modules.activation.Sigmoid"
    }
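
For intuition: ListNet turns the per-document scores and the graded labels into top-1 ("pick the best document") probability distributions via a softmax and minimizes the cross-entropy between them. The pad_value above marks padded documents, and the configured Sigmoid is applied to the raw scores first; the "scale-10" in this model's name presumably scales those activated scores, a detail not captured in the parameters shown. Below is a minimal plain-PyTorch sketch of this objective, not the library's exact implementation.

import torch
import torch.nn.functional as F

def listnet_loss(scores: torch.Tensor, labels: torch.Tensor, pad_value: int = -1) -> torch.Tensor:
    # scores: (batch, num_docs) raw model outputs for each query's documents.
    # labels: (batch, num_docs) graded relevance labels; pad_value marks padding.
    mask = labels != pad_value
    # Apply the configured Sigmoid activation to the raw scores.
    activated = torch.sigmoid(scores).masked_fill(~mask, float("-inf"))
    masked_labels = labels.float().masked_fill(~mask, float("-inf"))
    # Top-1 probability distributions over each query's documents.
    true_dist = F.softmax(masked_labels, dim=-1)
    log_pred = F.log_softmax(activated, dim=-1)
    # Cross-entropy, with padded positions zeroed out.
    ce = torch.where(mask, -true_dist * log_pred, torch.zeros_like(log_pred))
    return ce.sum(dim=-1).mean()

# One query with four documents, only the first relevant (cf. the samples above).
scores = torch.tensor([[2.1, -0.3, 0.4, -1.0]])
labels = torch.tensor([[1, 0, 0, 0]])
print(listnet_loss(scores, labels))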
    

Evaluation Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 1,000 evaluation samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:

    Column   Type    Details
    query    string  min: 10 characters, mean: 33.53 characters, max: 95 characters
    docs     list    size: 10 elements
    labels   list    size: 10 elements
  • Samples:
    query docs labels
    lpn salary richmond va ['$52,000. Average LPN salaries for job postings in Richmond, VA are 1% higher than average LPN salaries for job postings nationwide.', 'A Licensed Practical Nurse (LPN) in Richmond, Virginia earns an average wage of $18.47 per hour. For the first five to ten years in this position, pay increases somewhat, but any additional experience does not have a big effect on pay. $27,369 - $48,339. (Median).', 'Virginia has a growing number of opportunities in the nursing field. Within the state, LPNs make up 25 % of nurses in the state. The Virginia LPN comfort score is 54. This takes into account the average LPN salary, average state salary and cost of living.', 'LPN Salaries and Career Outlook in Richmond. Many LPN graduates choose to work as licensed practical nurses after graduation. If you choose to follow that path and remain in Richmond, your job prospects are good. In 2010, of the 20,060 licensed practical nurses in Virginia, 370 were working in the greater Richmond area.', 'This chart ... [1, 0, 0, 0, 0, ...]
    what is neutrogena ["Neutrogena is an American brand of skin care, hair care and cosmetics, that is headquartered in Los Angeles, California. According to product advertising at their website, Neutrogena products are distributed in more than 70 countries. Neutrogena was founded in 1930 by Emanuel Stolaroff, and was originally a cosmetics company named Natone. In 1994 Johnson & Johnson acquired Neutrogena for $924 million, at a price of $35.25 per share. Johnson & Johnson's international network helped Neutrogena boost its sales and enter newer markets including India, South Africa, and China. Priced at a premium, Neutrogena products are distributed in over 70 countries.", 'Neutrogena also has retinol products for treating acne that have one thing going for them that most brands do not—they are in the kind of package that keeps the retinol cream fresh and active. Any kind of vitamin you dip out of jar will go bad almost as soon as you open the container due to oxidation.ost of the products Neutrogena make... [1, 0, 0, 0, 0, ...]
    why is lincoln a great leader ['His commitment to the rights of individuals was a cornerstone of his leadership style (Phillips, 1992). There have been many great leaders throughout the history of this great nation, but Abraham Lincoln is consistently mentioned as one of our greatest leaders. Although Lincoln possessed many characteristics of a great leader, probably his greatest leadership trait was his ability to communicate. Though Lincoln only had one year of formal education, he was able to master language and use his words to influence the people as a great public speaker, debater and as a humorist. Another part of Lincoln’s skills as a great communicator, was that he had a great capacity for learning to listen to different points of view. While president, he created a work environment where his cabinet members were able to disagree with his decisions without the threat of retaliation for doing so.', 'Expressed in his own words, here is Lincoln’s most luminous leadership insight by far: In order to win a man ... [1, 0, 0, 0, 0, ...]
  • Loss: ListNetLoss with these parameters:
    {
        "pad_value": -1,
        "activation_fct": "torch.nn.modules.activation.Sigmoid"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 6
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • load_best_model_at_end: True
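
Taken together, a training run with these non-default values might look like the sketch below. It assumes the CrossEncoderTrainer API of recent sentence-transformers releases, and substitutes a tiny inline dataset for the 78,704 listwise ms_marco samples described above, whose preprocessing is not shown in this card.

import torch
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import ListNetLoss

# Start from the base model; a scoring head with one output label is added.
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Tiny inline stand-in for the query / docs / labels training data described above.
train_dataset = Dataset.from_dict({
    "query": ["how many calories in an egg"],
    "docs": [[
        "There are on average between 55 and 80 calories in an egg.",
        "Most of the calories in an egg come from the yolk.",
    ]],
    "labels": [[1, 0]],
})
eval_dataset = train_dataset  # stand-in; the real run used 1,000 held-out samples

loss = ListNetLoss(model, activation_fct=torch.nn.Sigmoid())

args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-listnet",
    num_train_epochs=3,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    eval_strategy="steps",
    eval_steps=4000,   # matches the evaluation cadence in the training logs below
    save_steps=4000,
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()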

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 6
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_ndcg@10 NanoNFCorpus_ndcg@10 NanoNQ_ndcg@10 NanoBEIR_R100_mean_ndcg@10
-1 -1 - - 0.0355 (-0.5049) 0.2822 (-0.0429) 0.0479 (-0.4528) 0.1218 (-0.3335)
0.0001 1 1.8394 - - - - -
0.0762 1000 2.0827 - - - - -
0.1525 2000 2.0821 - - - - -
0.2287 3000 2.078 - - - - -
0.3049 4000 2.0776 2.0734 0.5755 (+0.0350) 0.3542 (+0.0292) 0.5827 (+0.0820) 0.5041 (+0.0487)
0.3812 5000 2.0735 - - - - -
0.4574 6000 2.0719 - - - - -
0.5336 7000 2.0703 - - - - -
0.6098 8000 2.0712 2.0726 0.5748 (+0.0344) 0.3382 (+0.0131) 0.5966 (+0.0959) 0.5032 (+0.0478)
0.6861 9000 2.078 - - - - -
0.7623 10000 2.0712 - - - - -
0.8385 11000 2.0752 - - - - -
0.9148 12000 2.0755 2.0716 0.5395 (-0.0009) 0.3428 (+0.0177) 0.5451 (+0.0444) 0.4758 (+0.0204)
0.9910 13000 2.0698 - - - - -
1.0672 14000 2.072 - - - - -
1.1435 15000 2.0704 - - - - -
1.2197 16000 2.0693 2.0713 0.5538 (+0.0134) 0.3639 (+0.0388) 0.5766 (+0.0759) 0.4981 (+0.0427)
1.2959 17000 2.0716 - - - - -
1.3722 18000 2.0628 - - - - -
1.4484 19000 2.0691 - - - - -
1.5246 20000 2.0659 2.0733 0.5840 (+0.0435) 0.3676 (+0.0425) 0.6431 (+0.1425) 0.5316 (+0.0762)
1.6009 21000 2.0725 - - - - -
1.6771 22000 2.0725 - - - - -
1.7533 23000 2.0663 - - - - -
1.8295 24000 2.0671 2.0715 0.5521 (+0.0117) 0.3339 (+0.0089) 0.6005 (+0.0999) 0.4955 (+0.0401)
1.9058 25000 2.0686 - - - - -
1.9820 26000 2.0685 - - - - -
2.0582 27000 2.068 - - - - -
2.1345 28000 2.0622 2.0723 0.5721 (+0.0317) 0.3509 (+0.0258) 0.5870 (+0.0863) 0.5033 (+0.0480)
2.2107 29000 2.0664 - - - - -
2.2869 30000 2.0616 - - - - -
2.3632 31000 2.0661 - - - - -
2.4394 32000 2.0638 2.0725 0.5620 (+0.0216) 0.3481 (+0.0230) 0.5899 (+0.0893) 0.5000 (+0.0447)
2.5156 33000 2.0643 - - - - -
2.5919 34000 2.0611 - - - - -
2.6681 35000 2.0609 - - - - -
2.7443 36000 2.0658 2.0720 0.5846 (+0.0441) 0.3569 (+0.0318) 0.5759 (+0.0752) 0.5058 (+0.0504)
2.8206 37000 2.066 - - - - -
2.8968 38000 2.0692 - - - - -
2.9730 39000 2.0692 - - - - -
-1 -1 - - 0.5840 (+0.0435) 0.3676 (+0.0425) 0.6431 (+0.1425) 0.5316 (+0.0762)
  • The saved checkpoint corresponds to step 20000 (epoch 1.5246), whose metrics match the final evaluation row.

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.519 kWh
  • Carbon Emitted: 0.202 kg of CO2
  • Hours Used: 1.659 hours
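
Measurement along these lines can be reproduced with the codecarbon package; the following is a minimal sketch, not the exact instrumentation used for this run.

from codecarbon import EmissionsTracker
from sentence_transformers import CrossEncoder

tracker = EmissionsTracker()  # samples CPU, GPU, and RAM power draw
tracker.start()

# Any workload can be measured; the figures above presumably cover the training run.
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-sigmoid-scale-10")
model.predict([("How many calories in an egg", "Most of the calories in an egg come from the yolk.")])

emissions_kg = tracker.stop()  # total emissions in kg of CO2-equivalent
print(f"{emissions_kg:.3f} kg CO2eq")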

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.48.3
  • PyTorch: 2.5.0+cu121
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ListNetLoss

@inproceedings{cao2007learning,
    title={Learning to rank: from pairwise approach to listwise approach},
    author={Cao, Zhe and Qin, Tao and Liu, Tie-Yan and Tsai, Ming-Feng and Li, Hang},
    booktitle={Proceedings of the 24th international conference on Machine learning},
    pages={129--136},
    year={2007}
}