# SentenceTransformer based on BAAI/bge-base-en

This is a [sentence-transformers](https://www.sbert.net) model finetuned from [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
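The last two modules apply CLS-token pooling (`pooling_mode_cls_token: True`) followed by L2 normalization. A minimal numpy sketch of that post-transformer pipeline, using made-up token embeddings in place of real BERT output:

```python
import numpy as np

# Hypothetical transformer output for one input: (num_tokens, 768)
token_embeddings = np.random.rand(12, 768).astype(np.float32)

# (1) Pooling with pooling_mode_cls_token=True: keep only the [CLS] token
pooled = token_embeddings[0]

# (2) Normalize(): scale to unit L2 norm, so a dot product between two
#     embeddings equals their cosine similarity
embedding = pooled / np.linalg.norm(pooled)

print(embedding.shape)  # (768,)
```

Because of the final `Normalize()` step, downstream similarity can be computed with plain matrix products.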
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("austinpatrickm/finetuned_bge_embeddings_v4_base_v1.5")

# Run inference
sentences = [
    'Explain how the "Time mul" and "PAN" controls in the Riff Machine options can affect a musical progression. Provide examples of scenarios where these controls would be particularly useful.',
    'Document_title: Riff Machine \nFile_name: pianoroll_riff_prog.htm\nHeading_hierarchy: [Riff Machine -> Options]\nAnchor_id: [none]\nThese controls augment/modify the selected progression. Note that some controls will only have an effect if the original progression includes some variation in that parameter (PAN for example). • Time mul - Time multiplier, change the length of the notes. • PAN - Note panning multiplier. • VO\nL (Volume) - Note velocity multiplier. • MODX - Modulation X multiplier. • MODY - Modulation Y multiplier. • PITCH - Note pitch multiplier. • Absolute Pattern - On: any note slicing is based on the Piano roll grid. Off: Each note is sliced relative to its own start time. • Group notes - Groups any\nchopped-up notes, use the [grouping](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/toolbar_panels.htm#panel_shortcuticons_group) function on the [Toobar shortcut\nicons](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/toolbar_panels.htm#panel_shortcuticons) to activate note grouping. [Step 2. Chord Progression](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_chord.htm#Riff_Chord) [St\nep 3. Arpeggiation](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_arp.htm#Riff_Arp) [Step 4. Mirroring Notes](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_mirror.htm#Riff_Mirror) [Step 5. Levels &\nPanning](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_levels.htm#Riff_Levels) [Step 6. Articulation (note length)](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_art.htm#Riff_Art) [Step 7. Groove (note\ntiming)](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_groove.htm#Riff_Groove) [Step 8. Fit (note range)](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll_riff_fit.htm#Riff_Fit)',
    "Document_title: Layer Settings\nFile_name: chansettings_layer.htm\nHeading_hierarchy: [Layer Settings -> Options]\nAnchor_id: [none]\n• Levels Adjustment - This section contains controls for the volume (VOL) , panning (PAN) and Pitch of the linked layers. NOTE: The levels you set in the Layer Channel apply ONLY to the notes played through that layer. If you play a child of this Channel through \n its own [Step Sequencer](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/channelrack.htm) dots or [Piano roll](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/pianoroll.htm) , these settings will not be applied. • Layering section ➤ Set\nchildren - Assigns all selected Channels in the [Step Sequencer](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/channelrack.htm) as children in this Layer Channel. When you\n play a note on the Layer Channel, all the children play along. To unassign a Channel from the Layer Channel, select all the Channels you want to remain \n as children and press the Set children button again (all unselected Channels become unassigned for this Layer Channel). ➤ Show children - Selects all Channels that are children of this Layer Channel in the Step Sequencer, and deselects all other Channels. ➤ Random - OFF: All children of\nthe Layer Channel will sound on each note. ON: A single, random, Channel in the Layer will play. Use the 'Random' feature\n to make more interesting percussion sounds, for example, by assigning many similar samples to each Channel in the Layer. This will give subtle variations on\n each repeated note. ➤ Crossfade - ON: The Fade knob (below) will crossfade between two or more Channels in the Layer. ◆ Fade knob - Used to set the crossfade level in crossfade mode. For example; If you have 3 Layer Channels turning the Fade knob from left to right will \n sound: Child 1 > Child 1 + Child 2 > Child 2 > Child 2 + Child 3 > Child 3 . Channels are faded from top (knob left) to bottom (knob right) in the Channel Rack. NOTE: Crossfading only works with\n FL Studio native format plugins, it does not work with VST/AU plugins. • Sequential - ON: Each Channel will play in turn (round-robbin style) starting with the highest Channel working to the lowest when the ' Set children ' function was used. NOTE: The system remembers the Channel\norder when 'Set children' was used. To re-order the sequence, rearrange your Channels and reapply 'Set children'. • Layering menu - Click on the small arrow at the top left of this panel you can access some additional commands: ➤ Split children - Splits the children of the Layer Channel across\nthe keyboard (starting with the root key of the Layer Channel), assigning each layer to a single key. The root keys of the children are automatically adjusted, so that the correct pitch is played through the Layer Channel. This feature is useful for creating drum kits or instruments where each\nnote has different sample. ➤ Reset children - Resets the range and root notes for all Child Channels of a layer. Basically undoes the ' Split children ' actions. ➤ Group children - Adds all children of the Layer Channel to a group (a popup window will appear to enter the name of the group). For\nmore information see the Channel Filtering section in the [Step Sequencer](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/channelrack.htm) page. ➤ Delete children - Removes selected children from the layer. • Preview Keyboard - The preview keyboard allows you\nto preview the Channel instrument (Left-clicking on the piano-keyboard), set the root key (Right-Click a key), and set key region (Left-click and drag on the ruler). See the [Miscellaneous Channel\nSettings](https://www.image-line.com/fl-studio-learning/fl-studio-online-manual/html/chansettings_misc.htm) page for more information on using the Preview Keyboard.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
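Because the model's final `Normalize()` module makes every embedding unit-length, cosine similarity reduces to a plain dot product, so semantic search is a matrix product followed by an argsort. A minimal sketch with stand-in unit vectors (a real search would obtain `query` and `corpus` from `model.encode`):

```python
import numpy as np

def top_k(query_emb: np.ndarray, corpus_embs: np.ndarray, k: int = 2):
    """Rank corpus embeddings by cosine similarity to the query.

    Assumes all embeddings are already L2-normalized (as this model's
    Normalize() module guarantees), so cosine similarity is a dot product.
    """
    scores = corpus_embs @ query_emb           # (num_docs,)
    order = np.argsort(-scores)[:k]            # indices of the k best documents
    return [(int(i), float(scores[i])) for i in order]

# Stand-in embeddings: 3 unit vectors instead of model.encode(corpus)
corpus = np.eye(4)[:3]                         # each row is a unit vector
query = np.array([0.8, 0.6, 0.0, 0.0])         # unit norm: 0.8² + 0.6² = 1
print(top_k(query, corpus))                    # [(0, 0.8), (1, 0.6)]
```

For larger corpora the same dot-product ranking is what approximate-nearest-neighbour indexes accelerate.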
## Evaluation

### Metrics

#### Information Retrieval

- Evaluated with `InformationRetrievalEvaluator`
| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8475     |
| cosine_accuracy@3   | 0.9499     |
| cosine_accuracy@5   | 0.9708     |
| cosine_accuracy@10  | 0.9833     |
| cosine_precision@1  | 0.8475     |
| cosine_precision@3  | 0.3166     |
| cosine_precision@5  | 0.1942     |
| cosine_precision@10 | 0.0983     |
| cosine_recall@1     | 0.8475     |
| cosine_recall@3     | 0.9499     |
| cosine_recall@5     | 0.9708     |
| cosine_recall@10    | 0.9833     |
| **cosine_ndcg@10**  | **0.9211** |
| cosine_mrr@10       | 0.9006     |
| cosine_map@100      | 0.9013     |
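The @k metrics follow the standard definitions: accuracy@k asks whether any relevant document appears in the top k results, while precision@k divides the number of relevant hits in the top k by k. A rough sketch of these two (an illustration of the definitions, not the evaluator's actual implementation):

```python
def accuracy_at_k(ranked_ids, relevant_ids, k):
    """1.0 if any relevant document is ranked in the top k, else 0.0."""
    return 1.0 if set(ranked_ids[:k]) & set(relevant_ids) else 0.0

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top k results that are relevant."""
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / k

# One query with a single relevant document (hypothetical IDs):
ranked = ["doc7", "doc2", "doc9"]   # retrieval order, best first
relevant = ["doc2"]

print(accuracy_at_k(ranked, relevant, 1))   # 0.0
print(accuracy_at_k(ranked, relevant, 3))   # 1.0
print(precision_at_k(ranked, relevant, 3))  # 0.333...
```

With exactly one relevant document per query, recall@k equals accuracy@k, which is why those rows carry identical values in the table, and precision@k tends toward accuracy@k divided by k, consistent with cosine_precision@3 ≈ 0.3166.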
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 5,776 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 1000 samples:

|         | sentence_0                                         | sentence_1                                            |
|:--------|:---------------------------------------------------|:------------------------------------------------------|
| type    | string                                             | string                                                |
| details | min: 12 tokens, mean: 33.33 tokens, max: 66 tokens | min: 37 tokens, mean: 278.56 tokens, max: 512 tokens |
- Samples:

| sentence_0 | sentence_1 |
|:-----------|:-----------|
| Explain the issue that arises with project names in FL Studio 20 when using non-English characters, and describe the steps needed to resolve this issue on a Windows 10 system. | Title: Projects names are not showing correctly. Names in non-english characters (Cyrillic Korean, Japanese, Chinese, Hindi, Thai, etc.). Answer: FL Studio 20 works in unicode and displays in Windows 10 automatically your local character set. However, for projects moved from older FL Studio program versions, FL Studio does not know the character set it needs to display. Language not set up correctly: Your FL Studio 20 program will look like this: Solution: correct language set up instructions: A. Please check this Windows support article: Follow the steps below to set up non-unicode language in windows 10 1. In search tab type "Region" and press enter. 2. In new window select "Administrative" 3. then click on "change system locale" 4. Select the language. B. Import your old projects again into FL Studio 20. The names will now show u... |
| Discuss the importance of setting the correct language settings in FL Studio 20 for displaying project names accurately, especially when importing projects from older versions of the software. | Title: Projects names are not showing correctly. Names in non-english characters (Cyrillic Korean, Japanese, Chinese, Hindi, Thai, etc.). Answer: FL Studio 20 works in unicode and displays in Windows 10 automatically your local character set. However, for projects moved from older FL Studio program versions, FL Studio does not know the character set it needs to display. Language not set up correctly: Your FL Studio 20 program will look like this: Solution: correct language set up instructions: A. Please check this Windows support article: Follow the steps below to set up non-unicode language in windows 10 1. In search tab type "Region" and press enter. 2. In new window select "Administrative" 3. then click on "change system locale" 4. Select the language. B. Import your old projects again into FL Studio 20. The names will now show u... |
| How can you toggle the visibility of the FL Studio window when using it as a ReWire client within Cubase SX™? | Document_title: Using FL Studio ReWire with Cubase SX™ File_name: rewire_client_cubase.htm Heading_hierarchy: [Using FL Studio ReWire with Cubase SX™ -> 5. Toggle the FL Studio window visibility] Anchor_id: [none] Clicking the FL Studio icon toggles the visibility of the FL Studio window inside Cubase™. If you need to hide the FL Studio window, use the close button in the FL Studio window (this will not terminate the current session) or click the icon button on the FL Studio ReWire panel. To display the window later, click the icon button again. |

- Loss: `MultipleNegativesRankingLoss` with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
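MultipleNegativesRankingLoss treats, for each (sentence_0, sentence_1) pair in a batch, every other sentence_1 as a negative: the pairwise cosine similarities are scaled (here by `scale=20.0`) and passed through cross-entropy with the matching pair on the diagonal as the target. A numpy sketch of that objective, illustrative only and not the library's implementation:

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """In-batch-negatives ranking loss over scaled cosine similarities.

    anchors, positives: (batch, dim), assumed L2-normalized so the matrix
    product gives cosine similarities (the "cos_sim" setting above).
    """
    sims = scale * (anchors @ positives.T)  # (batch, batch)
    # Cross-entropy where row i's target is column i (the true pair)
    log_softmax = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
x /= np.linalg.norm(x, axis=1, keepdims=True)

# Perfectly matched pairs give near-zero loss; mismatched pairs a much larger one.
print(mnr_loss(x, x) < mnr_loss(x, np.roll(x, 1, axis=0)))  # True
```

This is why the loss needs only positive pairs: the rest of the batch supplies the negatives for free, which also makes larger batch sizes act as harder training signals.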
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 2
- `multi_dataset_batch_sampler`: round_robin
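As a rough sketch, the non-default values above map onto `SentenceTransformerTrainingArguments`; this is a hedged reconstruction rather than the original training script, and `output_dir` is a placeholder:

```python
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

# Mirrors the non-default hyperparameters listed above (assumed, not the
# author's actual script); "output" is a placeholder path.
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=2,
    multi_dataset_batch_sampler="round_robin",
)
```

These arguments would then be passed to a `SentenceTransformerTrainer` along with the model, the training dataset, and `MultipleNegativesRankingLoss`.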
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs

| Epoch  | Step | Training Loss | cosine_ndcg@10 |
|:------:|:----:|:-------------:|:--------------:|
| 0.0865 | 50   | -             | 0.9006         |
| 0.1730 | 100  | -             | 0.8944         |
| 0.2595 | 150  | -             | 0.8924         |
| 0.3460 | 200  | -             | 0.8996         |
| 0.4325 | 250  | -             | 0.9016         |
| 0.5190 | 300  | -             | 0.9021         |
| 0.6055 | 350  | -             | 0.9071         |
| 0.6920 | 400  | -             | 0.9112         |
| 0.7785 | 450  | -             | 0.9132         |
| 0.8651 | 500  | 0.1068        | 0.9139         |
| 0.9516 | 550  | -             | 0.9181         |
| 1.0    | 578  | -             | 0.9153         |
| 1.0381 | 600  | -             | 0.9126         |
| 1.1246 | 650  | -             | 0.9156         |
| 1.2111 | 700  | -             | 0.9150         |
| 1.2976 | 750  | -             | 0.9161         |
| 1.3841 | 800  | -             | 0.9159         |
| 1.4706 | 850  | -             | 0.9189         |
| 1.5571 | 900  | -             | 0.9174         |
| 1.6436 | 950  | -             | 0.9206         |
| 1.7301 | 1000 | 0.0144        | 0.9185         |
| 1.8166 | 1050 | -             | 0.9197         |
| 1.9031 | 1100 | -             | 0.9211         |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```