BGE-M3 NVIDIA Korean Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Language: ko
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
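
The CLS-token pooling and the final Normalize() module mean that encode() returns L2-normalized 1024-dimensional vectors. As a quick sanity check, the reported sequence length and dimensionality can be read back from the loaded model (a minimal sketch):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("harheem/bge-m3-nvidia-ko-v1")
print(model)                                     # module list as shown above
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 1024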

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("harheem/bge-m3-nvidia-ko-v1")
# Run inference
sentences = [
    'CUDAcast란 무엇인가요?',  # "What is CUDACast?"
    'CUDACast 시리즈에서는 어떤 주제를 다룰 예정인가요?',  # "What topics will the CUDACast series cover?"
    '이 게시물에 기여한 것으로 인정받은 사람은 누구입니까?',  # "Who is credited with contributing to this post?"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
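
Because the model was trained with MatryoshkaLoss over the dimensions 768, 512, 256, 128, and 64 (see Training Details), embeddings can be truncated to a smaller size with only a modest quality drop (see the per-dimension metrics below). A minimal sketch, assuming a sentence-transformers release that supports the truncate_dim argument (>= 2.7):

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model_256 = SentenceTransformer("harheem/bge-m3-nvidia-ko-v1", truncate_dim=256)

embeddings = model_256.encode([
    'CUDAcast란 무엇인가요?',  # "What is CUDACast?"
    'CUDACast 시리즈에서는 어떤 주제를 다룰 예정인가요?',  # "What topics will the CUDACast series cover?"
])
print(embeddings.shape)
# [2, 256]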

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.5443
cosine_accuracy@3 0.775
cosine_accuracy@5 0.8523
cosine_accuracy@10 0.9409
cosine_precision@1 0.5443
cosine_precision@3 0.2583
cosine_precision@5 0.1705
cosine_precision@10 0.0941
cosine_recall@1 0.5443
cosine_recall@3 0.775
cosine_recall@5 0.8523
cosine_recall@10 0.9409
cosine_ndcg@10 0.7411
cosine_mrr@10 0.6771
cosine_map@100 0.6802

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.5387
cosine_accuracy@3 0.775
cosine_accuracy@5 0.8594
cosine_accuracy@10 0.9451
cosine_precision@1 0.5387
cosine_precision@3 0.2583
cosine_precision@5 0.1719
cosine_precision@10 0.0945
cosine_recall@1 0.5387
cosine_recall@3 0.775
cosine_recall@5 0.8594
cosine_recall@10 0.9451
cosine_ndcg@10 0.7414
cosine_mrr@10 0.676
cosine_map@100 0.6789

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.5401
cosine_accuracy@3 0.7792
cosine_accuracy@5 0.8622
cosine_accuracy@10 0.9423
cosine_precision@1 0.5401
cosine_precision@3 0.2597
cosine_precision@5 0.1724
cosine_precision@10 0.0942
cosine_recall@1 0.5401
cosine_recall@3 0.7792
cosine_recall@5 0.8622
cosine_recall@10 0.9423
cosine_ndcg@10 0.7404
cosine_mrr@10 0.6756
cosine_map@100 0.6787

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.5218
cosine_accuracy@3 0.7679
cosine_accuracy@5 0.8636
cosine_accuracy@10 0.9367
cosine_precision@1 0.5218
cosine_precision@3 0.256
cosine_precision@5 0.1727
cosine_precision@10 0.0937
cosine_recall@1 0.5218
cosine_recall@3 0.7679
cosine_recall@5 0.8636
cosine_recall@10 0.9367
cosine_ndcg@10 0.7306
cosine_mrr@10 0.6642
cosine_map@100 0.6672

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.5091
cosine_accuracy@3 0.7426
cosine_accuracy@5 0.8284
cosine_accuracy@10 0.9311
cosine_precision@1 0.5091
cosine_precision@3 0.2475
cosine_precision@5 0.1657
cosine_precision@10 0.0931
cosine_recall@1 0.5091
cosine_recall@3 0.7426
cosine_recall@5 0.8284
cosine_recall@10 0.9311
cosine_ndcg@10 0.7136
cosine_mrr@10 0.6445
cosine_map@100 0.6474
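
The five tables above correspond to the Matryoshka truncation dimensions used during training (768, 512, 256, 128, and 64, in that order), matching the dim_* columns in the Training Logs. A hedged sketch of how such metrics can be reproduced with InformationRetrievalEvaluator; the queries, corpus, and relevance judgments below are placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("harheem/bge-m3-nvidia-ko-v1")

# Placeholder evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "CUDACast 시리즈에서는 어떤 주제를 다룰 예정인가요?"}
corpus = {"d1": "Placeholder passage describing the topics covered by the CUDACast series."}
relevant_docs = {"q1": {"d1"}}

for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # evaluate on embeddings truncated to this dimension
    )
    print(evaluator(model))  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100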

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,397 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 11 tokens, mean 48.46 tokens, max 107 tokens
    • anchor: string; min 9 tokens, mean 21.0 tokens, max 48 tokens
  • Samples:
    • positive: Warp-stride 및 block-stride 루프는 스레드 동작을 재구성하고 공유 메모리 액세스 패턴을 최적화하는 데 사용되었습니다. ("Warp-stride and block-stride loops were used to restructure thread behavior and optimize shared memory access patterns.")
      anchor: 코드에서 공유 메모리 액세스 패턴을 최적화하기 위해 어떤 유형의 루프가 사용되었습니까? ("What type of loops were used in the code to optimize shared memory access patterns?")
    • positive: Nsight Compute의 규칙은 성능 병목 현상을 식별하기 위한 구조화된 프레임워크를 제공하고 최적화 프로세스를 간소화하기 위한 실행 가능한 통찰력을 제공합니다. ("Rules in Nsight Compute provide a structured framework for identifying performance bottlenecks and actionable insights that streamline the optimization process.")
      anchor: Nsight Compute의 맥락에서 규칙이 중요한 이유는 무엇입니까? ("Why are rules important in the context of Nsight Compute?")
    • positive: NVIDIA Nsight와 같은 도구의 가용성으로 인해 개발자가 단일 GPU에서 디버깅할 수 있게 되어 CUDA 개발 속도가 크게 향상되었습니다. CUDA 메모리 검사기는 메모리 액세스 문제를 식별하여 코드 품질을 향상시키는 데 도움이 됩니다. ("The availability of tools such as NVIDIA Nsight lets developers debug on a single GPU, which has greatly sped up CUDA development. The CUDA memory checker helps improve code quality by identifying memory access issues.")
      anchor: 디버깅 도구의 가용성이 CUDA 개발에 어떤 영향을 미쳤습니까? ("How has the availability of debugging tools affected CUDA development?")
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
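
A minimal sketch of how this loss configuration maps onto the sentence-transformers API (model and loss construction only; the training loop is omitted):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")

# MultipleNegativesRankingLoss uses the other in-batch positives as negatives;
# MatryoshkaLoss applies it at every truncated embedding size listed below.
inner_loss = MultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # use every dimension at each training step
)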
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
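
As a rough sketch, the non-default values above correspond to a SentenceTransformerTrainingArguments configuration along these lines (output_dir and save_strategy are assumptions, not taken from this card):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-m3-nvidia-ko-v1",  # assumed output directory
    num_train_epochs=3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; must match eval_strategy when load_best_model_at_end=True
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)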

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100
0.8   | 10   | 1.3103        | -      | -      | -      | -      | -
0.96  | 12   | -             | 0.6512 | 0.6539 | 0.6688 | 0.6172 | 0.6679
1.6   | 20   | 0.4148        | -      | -      | -      | -      | -
2.0   | 25   | -             | 0.6615 | 0.6688 | 0.6783 | 0.6417 | 0.6763
2.4   | 30   | 0.2683        | -      | -      | -      | -      | -
2.88  | 36   | -             | 0.6672 | 0.6787 | 0.6789 | 0.6474 | 0.6802
  • The saved checkpoint corresponds to the epoch 2.88 (step 36) row; its map@100 values match the evaluation metrics reported above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.18.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}