
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
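
Since the Pooling module uses the CLS token and the final Normalize() module L2-normalizes every embedding, dot products between embeddings coincide with their cosine similarities. A minimal sketch to check this, reusing one of the example sentences from the Usage section below (numpy is assumed to be installed alongside sentence-transformers):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("adriansanz/sqv-5ep")
emb = model.encode([
    'Quin és el procediment per a la renovació del DNI en cas de sostracció?',
])

# The Normalize() module leaves every embedding with unit L2 norm,
# so a plain dot product between embeddings equals their cosine similarity.
print(np.linalg.norm(emb[0]))  # ~1.0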

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/sqv-5ep")
# Run inference
sentences = [
    'Import En cas de renovació per caducitat, pèrdua, sostracció o deteriorament: 12,00 € (en metàl·lic i preferiblement import exacte).',
    'Quin és el procediment per a la renovació del DNI en cas de sostracció?',
    "Quin és el paper del motiu legítim en l'oposició de dades personals en cas de motiu legítim i situació personal concreta?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
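
Beyond pairwise similarity, the same embeddings can drive semantic search over a document collection. A minimal sketch using sentence_transformers.util.semantic_search, reusing the example sentences above as a toy corpus and query (replace them with your own documents):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("adriansanz/sqv-5ep")

# Toy corpus and query taken from the example above; use your own documents in practice.
corpus = [
    'Import En cas de renovació per caducitat, pèrdua, sostracció o deteriorament: 12,00 € (en metàl·lic i preferiblement import exacte).',
    "Quin és el paper del motiu legítim en l'oposició de dades personals en cas de motiu legítim i situació personal concreta?",
]
query = 'Quin és el procediment per a la renovació del DNI en cas de sostracció?'

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns one ranked hit list per query; each hit carries a corpus_id and a cosine score
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])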

Evaluation

Metrics

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.0407
cosine_accuracy@3 0.1174
cosine_accuracy@5 0.1815
cosine_accuracy@10 0.3302
cosine_precision@1 0.0407
cosine_precision@3 0.0391
cosine_precision@5 0.0363
cosine_precision@10 0.033
cosine_recall@1 0.0407
cosine_recall@3 0.1174
cosine_recall@5 0.1815
cosine_recall@10 0.3302
cosine_ndcg@10 0.158
cosine_mrr@10 0.1065
cosine_map@100 0.1279

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.0391
cosine_accuracy@3 0.108
cosine_accuracy@5 0.1815
cosine_accuracy@10 0.3286
cosine_precision@1 0.0391
cosine_precision@3 0.036
cosine_precision@5 0.0363
cosine_precision@10 0.0329
cosine_recall@1 0.0391
cosine_recall@3 0.108
cosine_recall@5 0.1815
cosine_recall@10 0.3286
cosine_ndcg@10 0.1551
cosine_mrr@10 0.1033
cosine_map@100 0.1247

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.0407
cosine_accuracy@3 0.1017
cosine_accuracy@5 0.1659
cosine_accuracy@10 0.3224
cosine_precision@1 0.0407
cosine_precision@3 0.0339
cosine_precision@5 0.0332
cosine_precision@10 0.0322
cosine_recall@1 0.0407
cosine_recall@3 0.1017
cosine_recall@5 0.1659
cosine_recall@10 0.3224
cosine_ndcg@10 0.1517
cosine_mrr@10 0.101
cosine_map@100 0.123

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.0423
cosine_accuracy@3 0.1095
cosine_accuracy@5 0.1847
cosine_accuracy@10 0.3271
cosine_precision@1 0.0423
cosine_precision@3 0.0365
cosine_precision@5 0.0369
cosine_precision@10 0.0327
cosine_recall@1 0.0423
cosine_recall@3 0.1095
cosine_recall@5 0.1847
cosine_recall@10 0.3271
cosine_ndcg@10 0.1564
cosine_mrr@10 0.1054
cosine_map@100 0.1274

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.0407
cosine_accuracy@3 0.1127
cosine_accuracy@5 0.18
cosine_accuracy@10 0.3146
cosine_precision@1 0.0407
cosine_precision@3 0.0376
cosine_precision@5 0.036
cosine_precision@10 0.0315
cosine_recall@1 0.0407
cosine_recall@3 0.1127
cosine_recall@5 0.18
cosine_recall@10 0.3146
cosine_ndcg@10 0.1518
cosine_mrr@10 0.1029
cosine_map@100 0.1261

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.0407
cosine_accuracy@3 0.0986
cosine_accuracy@5 0.1596
cosine_accuracy@10 0.2911
cosine_precision@1 0.0407
cosine_precision@3 0.0329
cosine_precision@5 0.0319
cosine_precision@10 0.0291
cosine_recall@1 0.0407
cosine_recall@3 0.0986
cosine_recall@5 0.1596
cosine_recall@10 0.2911
cosine_ndcg@10 0.1405
cosine_mrr@10 0.0955
cosine_map@100 0.1194

Training Details

Training Dataset

Unnamed Dataset

  • Size: 5,750 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string (min: 4 tokens, mean: 43.32 tokens, max: 128 tokens)
    • anchor: string (min: 9 tokens, mean: 20.77 tokens, max: 45 tokens)
  • Samples:
    • positive: Aquest tràmit permet donar d'alta ofertes de treball que es gestionaran pel Servei a l'Ocupació.
      anchor: Com puc saber si el meu perfil és compatible amb les ofertes de treball?
    • positive: El titular de l’activitat ha de declarar sota la seva responsabilitat, que compleix els requisits establerts per la normativa vigent per a l’exercici de l’activitat, que disposa d’un certificat tècnic justificatiu i que es compromet a mantenir-ne el compliment durant el seu exercici.
      anchor: Quin és el paper del titular de l'activitat en la Declaració responsable?
    • positive: Aquest tipus de transmissió entre cedent i cessionari només podrà ser de caràcter gratuït i no condicionada.
      anchor: Quin és el paper del cedent en la transmissió de drets funeraris?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
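
Because the model was trained with MatryoshkaLoss over nested dimensions from 1024 down to 64, its embeddings can be truncated to a smaller size with only a modest drop in retrieval quality (compare the per-dimension tables in the Evaluation section). A minimal sketch using the truncate_dim argument of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns embeddings truncated to 256 dimensions
model_256 = SentenceTransformer("adriansanz/sqv-5ep", truncate_dim=256)

embeddings = model_256.encode([
    'Quin és el procediment per a la renovació del DNI en cas de sostracció?',
])
print(embeddings.shape)  # (1, 256)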
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
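
These hyperparameters map onto the sentence-transformers v3 Trainer API. The following is a hedged sketch of how a comparable run could be set up; the dataset construction, output path, eval split and save_strategy are placeholders and assumptions, not the author's actual training script:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-m3")

# Placeholder datasets with the two columns used above (positive, anchor).
train_dataset = Dataset.from_dict({"positive": ["..."], "anchor": ["..."]})
eval_dataset = Dataset.from_dict({"positive": ["..."], "anchor": ["..."]})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the loss parameters above.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="sqv-5ep",  # placeholder output directory
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so load_best_model_at_end has matching eval/save strategies
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()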

Training Logs

| Epoch | Step | Training Loss | dim_1024_cosine_map@100 | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:------:|:----:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.4444 | 10 | 4.5093 | - | - | - | - | - | - |
| 0.8889 | 20 | 2.7989 | - | - | - | - | - | - |
| 0.9778 | 22 | - | 0.1072 | 0.1182 | 0.1122 | 0.1083 | 0.1044 | 0.1082 |
| 1.3333 | 30 | 1.8343 | - | - | - | - | - | - |
| 1.7778 | 40 | 1.5248 | - | - | - | - | - | - |
| 2.0 | 45 | - | 0.1182 | 0.1203 | 0.1163 | 0.1188 | 0.1209 | 0.1229 |
| 2.2222 | 50 | 0.9624 | - | - | - | - | - | - |
| 2.6667 | 60 | 1.1161 | - | - | - | - | - | - |
| 2.9778 | 67 | - | 0.1235 | 0.1324 | 0.1302 | 0.1252 | 0.1213 | 0.1239 |
| 3.1111 | 70 | 0.7405 | 0.1249 | 0.1282 | 0.1310 | 0.1280 | 0.1181 | 0.1278 |
| 3.5556 | 80 | 0.8621 | - | - | - | - | - | - |
| 4.0 | 90 | 0.6071 | 0.1249 | 0.1282 | 0.1310 | 0.1280 | 0.1181 | 0.1278 |
| 4.4444 | 100 | 0.7091 | - | - | - | - | - | - |
| **4.8889** | **110** | **0.606** | **0.1279** | **0.1261** | **0.1274** | **0.1230** | **0.1194** | **0.1247** |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.35.0.dev0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}