SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Model Size: 109M parameters (F32 safetensors)
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
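
The modules run in sequence: the BERT encoder produces token embeddings, the pooling layer keeps only the [CLS] token vector (pooling_mode_cls_token: True), and Normalize() L2-normalizes the result. A minimal sketch of that pipeline using the plain transformers API, for illustration only; the SentenceTransformer class does all of this internally:

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(["example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (1, seq_len, 768)

# pooling_mode_cls_token=True: keep only the [CLS] vector
cls_embedding = token_embeddings[:, 0]
# Normalize(): L2-normalize, so cosine similarity reduces to a dot product
sentence_embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)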

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")
# Run inference
sentences = [
    ' and Arc Design is a registered service mark of Citigroup Inc. OpenInvestor is a service mark of Citigroup Inc. 1044398 GTS74053 0113 Trade Working Capital Viewpoints Navigating global uncertainty: Perspectives on supporting the healthcare supply chain November 2023 Treasury and Trade Solutions Foreword Foreword Since the inception of the COVID-19 pandemic, the healthcare industry has faced supply chain disruptions. The industry, which has a long tradition in innovation, continues to transform to meet the needs of an evolving environment. Pauline kXXXXX Unlocking the full potential within the healthcare industry Global Head, Trade requires continuous investment. As corporates plan for the Working Capital Advisory future, careful working capital management is essential to ensuring they get there. Andrew Betts Global head of TTS Trade Sales Client Management, Citi Bayo Gbowu Global Sector Lead, Trade Healthcare and Wellness Ian Kervick-Jimenez Trade Working Capital Advisory 2 Treasury and Trade Solutions The Working',
    'What are the registered service marks of Citigroup Inc?',
    'What is the role of DXX jXXXX US Real Estate Total Return SM Index in determining, composing or calculating products?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
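
For retrieval-style use, the same embeddings can drive semantic search. A hedged sketch; the corpus and query below are made up for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")

# Hypothetical passages; any list of strings works
corpus = [
    "Deposits received before the end of a Business Day will be credited to your account that day.",
    "Monthly service fees are applied only to accounts under the specified balance limits.",
]
corpus_embeddings = model.encode(corpus)
query_embedding = model.encode("When are deposits credited to an account?")

# Rank passages by cosine similarity (the embeddings are already normalized)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])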

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.4942
cosine_accuracy@3 0.6768
cosine_accuracy@5 0.7478
cosine_accuracy@10 0.8333
cosine_precision@1 0.4942
cosine_precision@3 0.2256
cosine_precision@5 0.1496
cosine_precision@10 0.0833
cosine_recall@1 0.4942
cosine_recall@3 0.6768
cosine_recall@5 0.7478
cosine_recall@10 0.8333
cosine_ndcg@10 0.6585
cosine_ndcg@100 0.6901
cosine_mrr@10 0.6032
cosine_map@100 0.6096
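
Accuracy@k and recall@k coincide above, which happens when each query has exactly one relevant passage. The card does not ship the evaluation script; a minimal sketch of how such numbers are produced with the sentence_transformers evaluator, using made-up ids and texts:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")

# Hypothetical evaluation data; ids are arbitrary strings
queries = {"q1": "What are the registered service marks of Citigroup Inc?"}
corpus = {"d1": "... OpenInvestor is a service mark of Citigroup Inc. ..."}
relevant_docs = {"q1": {"d1"}}  # corpus ids that answer each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
metrics = evaluator(model)  # dict with keys like "dim_768_cosine_map@100"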

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,201 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 146 tokens, mean: 205.96 tokens, max: 289 tokens
    • anchor: string; min: 8 tokens, mean: 26.75 tokens, max: 241 tokens
  • Samples:
    • positive: combined balances do not include: balances in delinquent accounts; balances that exceed your approved credit When Deposits Are Credited to an Account limit for any line of credit or credit card; or outstanding balances Deposits received before the end of a Business Day will be credited to your account that day. However, there been established for the Citigold Account Package. Your may be a delay before these funds are available for your use. See combined monthly balance range will be determined by computing the Funds Availability at Citibank section of this Marketplace an average of your monthly balances for your linked accounts Addendum for more information. during the prior calendar month. Monthly service fees are applied only to accounts with a combined average monthly balance range under the specified limits starting two statement cycles after account opening. Service fees assessed will appear as a charge on your next statement. 2 3 Combined Average Monthly Non- Per Special Circumstances Monthly Balance Service Citibank Check If a checking account is converted
      anchor: What are the conditions for balances to be included in the combined balances?
    • positive: the first six months, your credit score may not be where you want it just yet. There are other factors that impact your credit score including the length of your credit file, your credit mix and your credit utilization. If youre hoping to repair a credit score that has been damaged by financial setbacks, the timelines can be longer. A year or two with regular, timely payments and good credit utilization can push your credit score up. However, bankruptcies, collections, and late payments can linger on your credit report for anywhere from seven to ten years. That said, you may not have to use a secured credit card throughout your entire credit building process. Your goal may be to repair your credit to the point where your credit score is good enough to make you eligible for an unsecured credit card. To that end, youll need to research factors such as any fees that apply to the unsecured credit cards youre considering. There is no quick fix to having a great credit score. Building good credit with a
      anchor: What factors impact your credit score including the length of your credit file, your credit mix, and your credit utilization?
    • positive: by the index sponsor of the Constituents when it calculated the hypothetical back-tested index levels for the Constituents. It is impossible to predict whether the Index will rise or fall. The actual future performance of the Index may bear no relation to the historical or hypothetical back-tested levels of the Index. The Index Administrator, which is our Affiliate, and the Index Calculation Agent May Exercise Judgments under Certain Circumstances in the Calculation of the Index. Although the Index is rules- based, there are certain circumstances under which the Index Administrator or Index Calculation Agent may be required to exercise judgment in calculating the Index, including the following: The Index Administrator will determine whether an ambiguity, error or omission has arisen and the Index Administrator may resolve such ambiguity, error or omission, acting in good faith and in a commercially reasonable manner, and may amend the Index Rules to reflect the resolution of the ambiguity, error or omission in a manner that is consistent with the commercial objective of the Index. The Index Calculation Agents calculations
      anchor: What circumstances may require the Index Administrator or Index Calculation Agent to exercise judgment in calculating the Index?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768
        ],
        "matryoshka_weights": [
            1
        ],
        "n_dims_per_step": -1
    }
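
In code, these parameters correspond to wrapping MultipleNegativesRankingLoss (which treats the other in-batch anchors as negatives) in MatryoshkaLoss; with the single dimension 768, the wrapper is effectively a pass-through. A sketch under those parameters:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768], matryoshka_weights=[1])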
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
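
A hedged sketch of these settings as SentenceTransformerTrainingArguments. The output_dir is hypothetical, and save_strategy="epoch" is an assumption, since load_best_model_at_end requires the save and eval strategies to match:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-citi",  # hypothetical
    num_train_epochs=2,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption, required by load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)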

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_map@100
0.0515 10 0.7623 -
0.1031 20 0.6475 -
0.1546 30 0.4492 -
0.2062 40 0.3238 -
0.2577 50 0.2331 -
0.3093 60 0.2575 -
0.3608 70 0.3619 -
0.4124 80 0.1539 -
0.4639 90 0.1937 -
0.5155 100 0.241 -
0.5670 110 0.2192 -
0.6186 120 0.2553 -
0.6701 130 0.2438 -
0.7216 140 0.1916 -
0.7732 150 0.189 -
0.8247 160 0.1721 -
0.8763 170 0.2353 -
0.9278 180 0.1713 -
0.9794 190 0.2121 -
1.0 194 - 0.6100
1.0309 200 0.1394 -
1.0825 210 0.156 -
1.1340 220 0.1276 -
1.1856 230 0.0969 -
1.2371 240 0.0811 -
1.2887 250 0.0699 -
1.3402 260 0.0924 -
1.3918 270 0.0838 -
1.4433 280 0.064 -
1.4948 290 0.0624 -
1.5464 300 0.0837 -
1.5979 310 0.0881 -
1.6495 320 0.1065 -
1.7010 330 0.0646 -
1.7526 340 0.084 -
1.8041 350 0.0697 -
1.8557 360 0.0888 -
1.9072 370 0.0873 -
1.9588 380 0.0755 -
2.0 388 - 0.6096
  • The final row (epoch 2.0, step 388) denotes the saved checkpoint; its dim_768_cosine_map@100 of 0.6096 matches the cosine_map@100 reported in the Metrics section above.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}