
BGE large Legal Spanish

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Language: es
  • License: apache-2.0
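
These settings can be verified at runtime; a minimal check (a sketch, using the same Sentence Transformers install described under Usage below):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dariolopez/bge-m3-es-legal-tmp-3")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 1024
print(model.similarity_fn_name)                  # cosine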

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
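
For reference, the pipeline above is CLS-token pooling over an XLM-RoBERTa encoder followed by L2 normalization. The sketch below reproduces those two steps with plain transformers, assuming the checkpoint loads through AutoModel; the Sentence Transformers path shown under Usage is the supported route.

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "dariolopez/bge-m3-es-legal-tmp-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["¿Qué se considera discriminación indirecta?"],
    padding=True, truncation=True, max_length=8192, return_tensors="pt",
)
with torch.no_grad():
    out = encoder(**batch)

cls = out.last_hidden_state[:, 0]                     # (1) Pooling: CLS token
emb = torch.nn.functional.normalize(cls, p=2, dim=1)  # (2) Normalize()
print(emb.shape)  # torch.Size([1, 1024])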

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dariolopez/bge-m3-es-legal-tmp-3")
# Run inference
sentences = [
    'Artículo 6. Definiciones. 1. Discriminación directa e indirecta. b) La discriminación indirecta se produce cuando una disposición, criterio o práctica aparentemente neutros ocasiona o puede ocasionar a una o varias personas una desventaja particular con respecto a otras por razón de las causas previstas en el apartado 1 del artículo 2.',
    '¿Qué se considera discriminación indirecta?',
    '¿Qué tipo de información se considera veraz?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
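
For retrieval (the setting measured in the Evaluation section), embed queries and legal passages separately and rank the passages by cosine similarity. A small sketch with a hypothetical two-passage corpus:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dariolopez/bge-m3-es-legal-tmp-3")

query = "¿Qué se considera discriminación indirecta?"
corpus = [  # hypothetical passages for illustration
    "Artículo 6. Definiciones. 1. Discriminación directa e indirecta. (...)",
    "Artículo 4. Derecho a la igualdad de trato y no discriminación. (...)",
]

query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

# Embeddings are normalized, so cosine similarity gives the ranking directly
scores = model.similarity(query_emb, corpus_emb)  # shape [1, 2]
best = int(scores.argmax())
print(corpus[best])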

Evaluation

Metrics

Retrieval metrics are reported in one table per Matryoshka embedding dimension evaluated during training (1024, 768, 512, 256, 128, and 64; see the training logs below). Accuracy@k and recall@k coincide throughout, which indicates a single relevant passage per query.

Information Retrieval

Metric Value
cosine_accuracy@1 0.5335
cosine_accuracy@3 0.7927
cosine_accuracy@5 0.8476
cosine_accuracy@10 0.8811
cosine_precision@1 0.5335
cosine_precision@3 0.2642
cosine_precision@5 0.1695
cosine_precision@10 0.0881
cosine_recall@1 0.5335
cosine_recall@3 0.7927
cosine_recall@5 0.8476
cosine_recall@10 0.8811
cosine_ndcg@10 0.7187
cosine_mrr@10 0.6652
cosine_map@100 0.6706

Information Retrieval

Metric Value
cosine_accuracy@1 0.5366
cosine_accuracy@3 0.7988
cosine_accuracy@5 0.8445
cosine_accuracy@10 0.8872
cosine_precision@1 0.5366
cosine_precision@3 0.2663
cosine_precision@5 0.1689
cosine_precision@10 0.0887
cosine_recall@1 0.5366
cosine_recall@3 0.7988
cosine_recall@5 0.8445
cosine_recall@10 0.8872
cosine_ndcg@10 0.722
cosine_mrr@10 0.6678
cosine_map@100 0.6725

Information Retrieval

Metric Value
cosine_accuracy@1 0.5396
cosine_accuracy@3 0.7988
cosine_accuracy@5 0.8415
cosine_accuracy@10 0.8841
cosine_precision@1 0.5396
cosine_precision@3 0.2663
cosine_precision@5 0.1683
cosine_precision@10 0.0884
cosine_recall@1 0.5396
cosine_recall@3 0.7988
cosine_recall@5 0.8415
cosine_recall@10 0.8841
cosine_ndcg@10 0.7235
cosine_mrr@10 0.6706
cosine_map@100 0.6753

Information Retrieval

Metric Value
cosine_accuracy@1 0.5488
cosine_accuracy@3 0.7866
cosine_accuracy@5 0.8201
cosine_accuracy@10 0.878
cosine_precision@1 0.5488
cosine_precision@3 0.2622
cosine_precision@5 0.164
cosine_precision@10 0.0878
cosine_recall@1 0.5488
cosine_recall@3 0.7866
cosine_recall@5 0.8201
cosine_recall@10 0.878
cosine_ndcg@10 0.7222
cosine_mrr@10 0.6713
cosine_map@100 0.6765

Information Retrieval

Metric Value
cosine_accuracy@1 0.5274
cosine_accuracy@3 0.7713
cosine_accuracy@5 0.8201
cosine_accuracy@10 0.8628
cosine_precision@1 0.5274
cosine_precision@3 0.2571
cosine_precision@5 0.164
cosine_precision@10 0.0863
cosine_recall@1 0.5274
cosine_recall@3 0.7713
cosine_recall@5 0.8201
cosine_recall@10 0.8628
cosine_ndcg@10 0.7052
cosine_mrr@10 0.6535
cosine_map@100 0.6594

Information Retrieval

Metric Value
cosine_accuracy@1 0.5061
cosine_accuracy@3 0.7378
cosine_accuracy@5 0.8018
cosine_accuracy@10 0.8598
cosine_precision@1 0.5061
cosine_precision@3 0.2459
cosine_precision@5 0.1604
cosine_precision@10 0.086
cosine_recall@1 0.5061
cosine_recall@3 0.7378
cosine_recall@5 0.8018
cosine_recall@10 0.8598
cosine_ndcg@10 0.6884
cosine_mrr@10 0.6329
cosine_map@100 0.6381
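
The evaluation datasets themselves are not included in this card, but metrics of this kind can be reproduced with the InformationRetrievalEvaluator. The sketch below uses placeholder query/corpus/relevance dictionaries and the truncate_dim option to pick one of the Matryoshka dimensions; it is not the exact evaluation setup used for the tables above.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: id -> text, plus query id -> set of relevant corpus ids
queries = {"q1": "¿Qué se considera discriminación indirecta?"}
corpus = {
    "d1": "Artículo 6. Definiciones. 1. Discriminación directa e indirecta. (...)",
    "d2": "Artículo 4. Derecho a la igualdad de trato y no discriminación. (...)",
}
relevant_docs = {"q1": {"d1"}}

# truncate_dim evaluates embeddings truncated to one Matryoshka dimension
model = SentenceTransformer("dariolopez/bge-m3-es-legal-tmp-3", truncate_dim=256)
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_ndcg@10, cosine_map@100, ...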

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 32
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
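
Combined with the MatryoshkaLoss and MultipleNegativesRankingLoss references cited below and the dimensions visible in the training logs, these hyperparameters can be plugged into the Sentence Transformers trainer. The sketch below uses a placeholder (anchor, positive) dataset and approximates the run; it is not the original training script.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-m3")

# Placeholder (question, relevant passage) pairs; the real training data is not shown here
train_dataset = Dataset.from_dict({
    "anchor": ["¿Qué se considera discriminación indirecta?"],
    "positive": ["Artículo 6. Definiciones. 1. Discriminación directa e indirecta. (...)"],
})
eval_dataset = train_dataset  # placeholder; eval_strategy="epoch" needs an eval set

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[1024, 768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="bge-m3-es-legal",
    num_train_epochs=32,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()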

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 32
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.8649 10 1.5054 - - - - - - -
0.9514 11 - 0.8399 0.6684 0.6148 0.6574 0.6770 0.5281 0.6691
1.7297 20 1.0127 - - - - - - -
1.9892 23 - 0.5057 0.6757 0.6596 0.6715 0.6738 0.6017 0.6719
2.5946 30 0.5708 - - - - - - -
2.9405 34 - 0.4593 0.6781 0.6551 0.6795 0.6806 0.6165 0.6697
3.4595 40 0.2618 - - - - - - -
3.9784 46 - 0.4122 0.6787 0.6607 0.6842 0.6795 0.6227 0.6793
4.3243 50 0.1079 - - - - - - -
4.9297 57 - 0.3717 0.6827 0.6609 0.6810 0.6868 0.6277 0.6769
5.1892 60 0.0574 - - - - - - -
5.9676 69 - 0.3394 0.6824 0.6493 0.6777 0.6784 0.6344 0.6685
6.0541 70 0.0342 - - - - - - -
6.9189 80 0.0211 0.3379 0.6771 0.6627 0.6764 0.6766 0.6395 0.6723
7.7838 90 0.0136 - - - - - - -
7.9568 92 - 0.3128 0.6790 0.6536 0.6789 0.6782 0.6279 0.6730
8.6486 100 0.0087 - - - - - - -
8.9946 104 - 0.3163 0.6811 0.6542 0.6716 0.6744 0.6413 0.6758
9.5135 110 0.0073 - - - - - - -
9.9459 115 - 0.2937 0.6730 0.6569 0.6735 0.6747 0.6380 0.6710
10.3784 120 0.0049 - - - - - - -
10.9838 127 - 0.2927 0.6701 0.6578 0.6772 0.6724 0.6355 0.6738
11.2432 130 0.0044 - - - - - - -
11.9351 138 - 0.2837 0.6720 0.6558 0.6791 0.6752 0.6376 0.6783
12.1081 140 0.0035 - - - - - - -
12.9730 150 0.0031 0.2897 0.6746 0.6610 0.6708 0.6739 0.6375 0.6769
13.8378 160 0.0027 - - - - - - -
13.9243 161 - 0.2961 0.6733 0.6562 0.6692 0.6704 0.6402 0.6740
14.7027 170 0.0026 - - - - - - -
14.9622 173 - 0.2934 0.6734 0.6557 0.6720 0.6720 0.6368 0.6726
15.5676 180 0.0025 - - - - - - -
16.0 185 - 0.2932 0.6735 0.6561 0.6718 0.6744 0.6414 0.6773
16.4324 190 0.0023 - - - - - - -
16.9514 196 - 0.2912 0.6708 0.6582 0.6761 0.6794 0.6367 0.6753
17.2973 200 0.0021 - - - - - - -
17.9892 208 - 0.2925 0.6726 0.6582 0.6747 0.6773 0.6357 0.6737
18.1622 210 0.0022 - - - - - - -
18.9405 219 - 0.2965 0.6688 0.6563 0.6758 0.6769 0.6372 0.6765
19.0270 220 0.002 - - - - - - -
19.8919 230 0.0019 - - - - - - -
19.9784 231 - 0.3010 0.6697 0.6563 0.6768 0.6775 0.6380 0.6730
20.7568 240 0.0018 - - - - - - -
20.9297 242 - 0.3025 0.6728 0.6564 0.6764 0.6757 0.6367 0.6728
21.6216 250 0.0019 - - - - - - -
21.9676 254 - 0.3043 0.6707 0.6533 0.6733 0.6750 0.6352 0.6729
22.4865 260 0.0018 - - - - - - -
22.9189 265 - 0.3029 0.6706 0.6554 0.6734 0.6757 0.6355 0.6715
23.3514 270 0.0018 - - - - - - -
23.9568 277 - 0.3046 0.6706 0.6586 0.6733 0.6740 0.6383 0.6731
24.2162 280 0.0018 - - - - - - -
24.9946 289 - 0.3045 0.6722 0.6553 0.6740 0.6752 0.6364 0.6735
25.0811 290 0.0016 - - - - - - -
25.9459 300 0.0017 0.3061 0.6703 0.6564 0.6770 0.6736 0.6371 0.6724
26.8108 310 0.0016 - - - - - - -
26.9838 312 - 0.3023 0.6694 0.6581 0.6790 0.6771 0.6375 0.6731
27.6757 320 0.0015 - - - - - - -
27.9351 323 - 0.3035 0.6701 0.6585 0.6748 0.6787 0.6366 0.6729
28.5405 330 0.0016 - - - - - - -
28.9730 335 - 0.3017 0.6686 0.6568 0.6748 0.6710 0.6357 0.6713
29.4054 340 0.0016 - - - - - - -
29.9243 346 - 0.3043 0.6683 0.6549 0.6722 0.6762 0.6367 0.6712
30.2703 350 0.0017 - - - - - - -
30.4432 352 - 0.3056 0.6706 0.6594 0.6765 0.6753 0.6381 0.6725
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.3
  • PyTorch: 2.2.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}