
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
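
For reference, a roughly equivalent module stack can be assembled by hand from sentence_transformers.models. This is only an illustrative sketch; the usual route is simply loading the published checkpoint, as shown under Usage below.

from sentence_transformers import SentenceTransformer, models

# Illustrative reconstruction of the stack above:
# BERT encoder -> CLS-token pooling -> L2 normalization.
transformer = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model.get_sentence_embedding_dimension())
# 768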

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adarshheg/bge-base-financial-matryoshka")
# Run inference
sentences = [
    "During 2023, FedEx ranked 18th in FORTUNE magazine's 'World's Most Admired Companies' list and maintained its position as the highest-ranked delivery company on the list.",
    'What recognition did FedEx receive from FORTUNE magazine in 2023?',
    'What was the valuation allowance against deferred tax assets at the end of 2023, and what changes may affect its realization?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
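
Because the model was trained with MatryoshkaLoss over the dimensions 768/512/256/128/64, embeddings can also be truncated to a smaller Matryoshka dimension at load time. A minimal sketch using the truncate_dim argument (available in recent sentence-transformers releases):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions; 256 is one of the
# Matryoshka dimensions this model was trained with.
model = SentenceTransformer("adarshheg/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "What recognition did FedEx receive from FORTUNE magazine in 2023?",
])
print(embeddings.shape)
# (1, 256)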

Evaluation

Metrics

The five blocks below report information-retrieval metrics at each Matryoshka dimension (768, 512, 256, 128, and 64); the dimension labels are inferred from the matching dim_*_cosine_map@100 values in the Training Logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7767
cosine_accuracy@3 0.86
cosine_accuracy@5 0.89
cosine_accuracy@10 0.9333
cosine_precision@1 0.7767
cosine_precision@3 0.2867
cosine_precision@5 0.178
cosine_precision@10 0.0933
cosine_recall@1 0.7767
cosine_recall@3 0.86
cosine_recall@5 0.89
cosine_recall@10 0.9333
cosine_ndcg@10 0.852
cosine_mrr@10 0.8264
cosine_map@100 0.8286

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7567
cosine_accuracy@3 0.87
cosine_accuracy@5 0.8933
cosine_accuracy@10 0.9333
cosine_precision@1 0.7567
cosine_precision@3 0.29
cosine_precision@5 0.1787
cosine_precision@10 0.0933
cosine_recall@1 0.7567
cosine_recall@3 0.87
cosine_recall@5 0.8933
cosine_recall@10 0.9333
cosine_ndcg@10 0.8462
cosine_mrr@10 0.8183
cosine_map@100 0.8207

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.76
cosine_accuracy@3 0.86
cosine_accuracy@5 0.89
cosine_accuracy@10 0.9267
cosine_precision@1 0.76
cosine_precision@3 0.2867
cosine_precision@5 0.178
cosine_precision@10 0.0927
cosine_recall@1 0.76
cosine_recall@3 0.86
cosine_recall@5 0.89
cosine_recall@10 0.9267
cosine_ndcg@10 0.8433
cosine_mrr@10 0.8167
cosine_map@100 0.8191

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7067
cosine_accuracy@3 0.84
cosine_accuracy@5 0.8633
cosine_accuracy@10 0.91
cosine_precision@1 0.7067
cosine_precision@3 0.28
cosine_precision@5 0.1727
cosine_precision@10 0.091
cosine_recall@1 0.7067
cosine_recall@3 0.84
cosine_recall@5 0.8633
cosine_recall@10 0.91
cosine_ndcg@10 0.8099
cosine_mrr@10 0.7776
cosine_map@100 0.781

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6833
cosine_accuracy@3 0.7933
cosine_accuracy@5 0.8367
cosine_accuracy@10 0.88
cosine_precision@1 0.6833
cosine_precision@3 0.2644
cosine_precision@5 0.1673
cosine_precision@10 0.088
cosine_recall@1 0.6833
cosine_recall@3 0.7933
cosine_recall@5 0.8367
cosine_recall@10 0.88
cosine_ndcg@10 0.7796
cosine_mrr@10 0.7476
cosine_map@100 0.7519
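
These figures are the standard outputs of an information-retrieval evaluation over (anchor question, relevant passage) pairs. A hedged sketch of how such numbers are typically produced with InformationRetrievalEvaluator; the queries, corpus, and relevant_docs below are illustrative placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("adarshheg/bge-base-financial-matryoshka")

# Placeholder evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "What recognition did FedEx receive from FORTUNE magazine in 2023?"}
corpus = {"d1": "During 2023, FedEx ranked 18th in FORTUNE magazine's 'World's Most Admired Companies' list."}
relevant_docs = {"q1": {"d1"}}

# truncate_dim restricts scoring to the first N embedding dimensions,
# which is how the per-dimension tables above are obtained.
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256", truncate_dim=256)
results = evaluator(model)  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100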

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,500 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 6 tokens, mean: 46.0 tokens, max: 239 tokens
    • anchor: string; min: 9 tokens, mean: 20.82 tokens, max: 42 tokens
  • Samples:
    • positive: In the U.S., Visa Inc.'s total nominal payments volume increased by 17% from $4,725 billion in 2021 to $5,548 billion in 2022.
      anchor: What is the total percentage increase in Visa Inc.'s nominal payments volume in the U.S. from 2021 to 2022?
    • positive: The section titled 'Financial Wtatement and Supplementary Data' is labeled with the number 39 in the document.
      anchor: What is the numerical label associated with the section on Financial Statements and Supplementary Data in the document?
    • positive: The consolidated financial statements and accompanying notes are incorporated by reference herein.
      anchor: Are the consolidated financial statements and accompanying notes incorporated by reference in the Annual Report on Form 10-K?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
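
This configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss; a minimal sketch of that construction:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking loss
train_loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])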
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
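
Taken together with the MatryoshkaLoss construction sketched in the Training Dataset section, these settings map onto the sentence-transformers v3 trainer roughly as follows. This is a hedged sketch, not the exact training script: output_dir and the placeholder dataset rows are illustrative, and save_strategy="epoch" is assumed because load_best_model_at_end requires the save and eval strategies to match.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Placeholder data with the same column names as the training dataset above.
train_dataset = Dataset.from_dict({
    "positive": ["FedEx ranked 18th in FORTUNE magazine's 'World's Most Admired Companies' list in 2023."],
    "anchor": ["What recognition did FedEx receive from FORTUNE magazine in 2023?"],
})

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    num_train_epochs=2,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    optim="adamw_torch_fused",  # as listed above; requires a CUDA GPU
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed so that load_best_model_at_end can compare checkpoints
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; a held-out split would be used in practice
    loss=loss,
)
trainer.train()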

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch | Step | dim_128 | dim_256 | dim_512 | dim_64 | dim_768
0.6809 | 2 | 0.7796 | 0.8153 | 0.8165 | 0.7375 | 0.8186
1.3617 | 4 | 0.781 | 0.8191 | 0.8207 | 0.7519 | 0.8286

All dim_* columns report cosine_map@100. The second row (epoch 1.3617, step 4) is the saved checkpoint, shown in bold on the original card; its values match the evaluation metrics reported above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}