SentenceTransformer based on BAAI/bge-base-en

This is a sentence-transformers model finetuned from BAAI/bge-base-en. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
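Because the pipeline ends in a Normalize() module, every embedding has unit length, so the cosine similarity used by this model reduces to a plain dot product. A minimal sketch verifying this; the two example sentences are arbitrary placeholders:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("austinpatrickm/finetuned_bge_embeddings_v4_base_v1.5")
emb = model.encode(["Riff Machine options", "Layer Settings options"])

# The Normalize() module makes every vector unit-length...
print(np.linalg.norm(emb, axis=1))  # ~[1. 1.]
# ...so the dot product of two embeddings already equals their cosine similarity.
print(float(emb[0] @ emb[1]))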

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("austinpatrickm/finetuned_bge_embeddings_v4_base_v1.5")
# Run inference
sentences = [
    'Explain how the "Time mul" and "PAN" controls in the Riff Machine options can affect a musical progression. Provide examples of scenarios where these controls would be particularly useful.',
    'Document_title: Riff Machine\nFile_name: pianoroll_riff_prog.htm\nHeading_hierarchy: [Riff Machine -> Options]\nAnchor_id: [none]\nThese controls augment/modify the selected progression. Note that some controls will only have an effect if the original progression includes some variation in that parameter (PAN for example). • Time mul - Time multiplier, change the length of the notes. • PAN - Note panning multiplier. • VOL (Volume) - Note velocity multiplier. • MODX - Modulation X multiplier. • MODY - Modulation Y multiplier. • PITCH - Note pitch multiplier. • Absolute Pattern - On: any note slicing is based on the Piano roll grid. Off: Each note is sliced relative to its own start time. • Group notes - Groups any chopped-up notes, use the [grouping](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/toolbar_panels.htm#panel_shortcuticons_group) function on the [Toolbar shortcut icons](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/toolbar_panels.htm#panel_shortcuticons) to activate note grouping. [Step 2. Chord Progression](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_chord.htm#Riff_Chord) [Step 3. Arpeggiation](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_arp.htm#Riff_Arp) [Step 4. Mirroring Notes](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_mirror.htm#Riff_Mirror) [Step 5. Levels & Panning](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_levels.htm#Riff_Levels) [Step 6. Articulation (note length)](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_art.htm#Riff_Art) [Step 7. Groove (note timing)](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_groove.htm#Riff_Groove) [Step 8. Fit (note range)](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_fit.htm#Riff_Fit)',
    "Document_title: Layer Settings\nFile_name: chansettings_layer.htm\nHeading_hierarchy: [Layer Settings -> Options]\nAnchor_id: [none]\n• Levels Adjustment - This section contains controls for the volume (VOL) , panning (PAN) and Pitch of the linked layers. NOTE: The levels you set in the Layer Channel apply ONLY to the notes played through that layer. If you play a child of this Channel through \n        its own [Step Sequencer](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/channelrack.htm) dots or [Piano roll](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll.htm) , these settings will not be applied. • Layering section ➤ Set\nchildren - Assigns all selected Channels in the [Step Sequencer](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/channelrack.htm) as children in this Layer Channel. When you\n          play a note on the Layer Channel, all the children play along. To unassign a Channel from the Layer Channel, select all the Channels you want to remain \n          as children and press the Set children button again (all unselected Channels become unassigned for this Layer Channel). ➤ Show children - Selects all Channels that are children of this Layer Channel in the Step Sequencer, and deselects all other Channels. ➤ Random - OFF: All children of\nthe Layer Channel will sound on each note. ON: A single, random, Channel in the Layer will play. Use the 'Random' feature\n          to make more interesting percussion sounds, for example, by assigning many similar samples to each Channel in the Layer. This will give subtle variations on\n          each repeated note. ➤ Crossfade - ON: The Fade knob (below) will crossfade between two or more Channels in the Layer. ◆ Fade knob - Used to set the crossfade level in crossfade mode. For example; If you have 3 Layer Channels turning the Fade knob from left to right will \n          sound: Child 1 > Child 1 + Child 2 > Child 2 > Child 2 + Child 3 > Child 3 . Channels are faded from top (knob left) to bottom (knob right) in the Channel Rack. NOTE: Crossfading only works with\n          FL Studio native format plugins, it does not work with VST/AU  plugins. • Sequential - ON: Each Channel will play in turn (round-robbin style) starting with the highest Channel working to the lowest when the ' Set children ' function was used. NOTE: The system remembers the Channel\norder when 'Set children' was used. To re-order the sequence, rearrange your Channels and reapply 'Set children'. • Layering menu - Click on the small arrow at the top left of this panel you can access some additional commands: ➤ Split children - Splits the children of the Layer Channel across\nthe keyboard (starting with the root key of the Layer Channel), assigning each layer to a single key. The root keys of the children are automatically adjusted, so that the correct pitch is played through the Layer Channel. This feature is useful for creating drum kits or instruments where each\nnote has different sample. ➤ Reset children - Resets the range and root notes for all Child Channels of a layer. Basically undoes the ' Split children ' actions. ➤ Group children - Adds all children of the Layer Channel to a group (a popup window will appear to enter the name of the group). For\nmore information see the Channel Filtering section in the [Step Sequencer](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/channelrack.htm) page. 
➤ Delete children - Removes selected children from the layer. • Preview Keyboard - The preview keyboard allows you\nto preview the Channel instrument (Left-clicking on the piano-keyboard), set the root key (Right-Click a key), and set key region (Left-click and drag on the ruler). See the [Miscellaneous Channel\nSettings](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/chansettings_misc.htm) page for more information on using the Preview Keyboard.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.8475
cosine_accuracy@3 0.9499
cosine_accuracy@5 0.9708
cosine_accuracy@10 0.9833
cosine_precision@1 0.8475
cosine_precision@3 0.3166
cosine_precision@5 0.1942
cosine_precision@10 0.0983
cosine_recall@1 0.8475
cosine_recall@3 0.9499
cosine_recall@5 0.9708
cosine_recall@10 0.9833
cosine_ndcg@10 0.9211
cosine_mrr@10 0.9006
cosine_map@100 0.9013
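Metrics like these can be reproduced with the library's InformationRetrievalEvaluator. A minimal sketch; the toy queries, corpus, and relevance judgments below are invented placeholders, not the actual held-out evaluation split behind the numbers above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("austinpatrickm/finetuned_bge_embeddings_v4_base_v1.5")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "How do I change note length in the Riff Machine?"}
corpus = {"d1": "Time mul - Time multiplier, change the length of the notes."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(results)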

Training Details

Training Dataset

Unnamed Dataset

  • Size: 5,776 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 12 tokens, mean 33.33 tokens, max 66 tokens
    • sentence_1 (string): min 37 tokens, mean 278.56 tokens, max 512 tokens
  • Samples:
    • sentence_0: Explain the issue that arises with project names in FL Studio 20 when using non-English characters, and describe the steps needed to resolve this issue on a Windows 10 system.
      sentence_1: Title: Projects names are not showing correctly. Names in non-english characters (Cyrillic Korean, Japanese, Chinese, Hindi, Thai, etc.). Answer: FL Studio 20 works in unicode and displays in Windows 10 automatically your local character set. However, for projects moved from older FL Studio program versions, FL Studio does not know the character set it needs to display. Language not set up correctly: Your FL Studio 20 program will look like this: Solution: correct language set up instructions: A. Please check this Windows support article: Follow the steps below to set up non-unicode language in windows 10 1. In search tab type "Region" and press enter. 2. In new window select "Administrative" 3. then click on "change system locale" 4. Select the language. B. Import your old projects again into FL Studio 20. The names will now show u...
    • sentence_0: Discuss the importance of setting the correct language settings in FL Studio 20 for displaying project names accurately, especially when importing projects from older versions of the software.
      sentence_1: Title: Projects names are not showing correctly. Names in non-english characters (Cyrillic Korean, Japanese, Chinese, Hindi, Thai, etc.). Answer: FL Studio 20 works in unicode and displays in Windows 10 automatically your local character set. However, for projects moved from older FL Studio program versions, FL Studio does not know the character set it needs to display. Language not set up correctly: Your FL Studio 20 program will look like this: Solution: correct language set up instructions: A. Please check this Windows support article: Follow the steps below to set up non-unicode language in windows 10 1. In search tab type "Region" and press enter. 2. In new window select "Administrative" 3. then click on "change system locale" 4. Select the language. B. Import your old projects again into FL Studio 20. The names will now show u...
    • sentence_0: How can you toggle the visibility of the FL Studio window when using it as a ReWire client within Cubase SX™?
      sentence_1: Document_title: Using FL Studio ReWire with Cubase SX™ File_name: rewire_client_cubase.htm Heading_hierarchy: [Using FL Studio ReWire with Cubase SX™ -> 5. Toggle the FL Studio window visibility] Anchor_id: [none] Clicking the FL Studio icon toggles the visibility of the FL Studio window inside Cubase™. If you need to hide the FL Studio window, use the close button in the FL Studio window (this will not terminate the current session) or click the icon button on the FL Studio ReWire panel. To display the window later, click the icon button again.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
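The loss configuration above maps directly onto the library's API. A minimal sketch, loading the base model fresh as at the start of this training run:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-base-en")

# MultipleNegativesRankingLoss treats the other in-batch sentence_1 entries as
# negatives for each (sentence_0, sentence_1) pair; scale=20.0 multiplies the
# cosine similarity scores before the softmax.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)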

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 2
  • multi_dataset_batch_sampler: round_robin
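These values map onto the library's trainer API. A minimal end-to-end sketch with the non-default hyperparameters above; the two training rows are invented placeholders standing in for the 5,776 real pairs, the output_dir is hypothetical, and the evaluator driving eval_strategy: steps is omitted for brevity:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("BAAI/bge-base-en")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Placeholder (sentence_0, sentence_1) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["How do I change note length?", "What does the Fade knob do?"],
    "sentence_1": ["Time mul - Time multiplier...", "Fade knob - sets the crossfade level..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_bge_embeddings",  # hypothetical path
    num_train_epochs=2,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()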

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss cosine_ndcg@10
0.0865 50 - 0.9006
0.1730 100 - 0.8944
0.2595 150 - 0.8924
0.3460 200 - 0.8996
0.4325 250 - 0.9016
0.5190 300 - 0.9021
0.6055 350 - 0.9071
0.6920 400 - 0.9112
0.7785 450 - 0.9132
0.8651 500 0.1068 0.9139
0.9516 550 - 0.9181
1.0 578 - 0.9153
1.0381 600 - 0.9126
1.1246 650 - 0.9156
1.2111 700 - 0.9150
1.2976 750 - 0.9161
1.3841 800 - 0.9159
1.4706 850 - 0.9189
1.5571 900 - 0.9174
1.6436 950 - 0.9206
1.7301 1000 0.0144 0.9185
1.8166 1050 - 0.9197
1.9031 1100 - 0.9211

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
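To recreate a matching environment, the versions above can be pinned at install time (PyTorch 2.5.1+cu121 comes from the CUDA 12.1 wheel index, which depends on your platform):

pip install sentence-transformers==3.3.1 transformers==4.47.1 accelerate==1.2.1 datasets==3.2.0 tokenizers==0.21.0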

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}