---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:500
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: >-
Non-context LLM: Prompt LLM directly with <atomic-fact> True or False?
without additional context.
Retrieval→LLM: Prompt with $k$ related passages retrieved from the
knowledge source as context.
Nonparametric probability (NP): Compute the average likelihood of tokens
in the atomic fact by a masked LM and use that to make a prediction.
Retrieval→LLM + NP: Ensemble of two methods.
Some interesting observations on model hallucination behavior:
Error rates are higher for rarer entities in the task of biography
generation.
Error rates are higher for facts mentioned later in the generation.
Using retrieval to ground the model generation significantly helps reduce
hallucination.
sentences:
- >-
What is the impact of infrequent entities on the efficacy of language
models in the context of biography generation?
- >-
In what ways does FActScore enhance the assessment of factual accuracy
in long-form content generation when compared to conventional evaluation
techniques?
- >-
What approaches does SelfCheckGPT implement when faced with questions it
cannot answer, and how does this influence its overall reliability in
delivering accurate information?
- source_sentence: >-
Revision stage: Edit the output to correct content unsupported by evidence
while preserving the original content as much as possible. Initialize the
revised text $y=x$.
(1) Per $(q_i, e_{ij})$, an agreement model (via few-shot prompting + CoT,
$(y, q, e) \to \{0,1\}$) checks whether the evidence $e_{ij}$ disagrees with
the current revised text $y$.
(2) Only if a disagreement is detected, the edit model (via few-shot
prompting + CoT, $(y, q, e) \to \text{ new }y$) outputs a new version of
$y$ that aims to agree with evidence $e_{ij}$ while otherwise minimally
altering $y$.
(3) Finally, only a limited number $M=5$ of evidence snippets go into the
attribution report $A$.
Fig. 12. Illustration of RARR (Retrofit Attribution using Research and
Revision). (Image source: Gao et al. 2022)
When evaluating the revised text $y$, both attribution and preservation
metrics matter.
sentences:
- >-
What impact does adjusting the sampling temperature have on the
calibration of large language models, and how does this influence the
uncertainty of their outputs?
- >-
How do unanswerable questions differ from answerable ones in the context
of a language model's understanding of its own capabilities?
- >-
In what ways does the agreement model evaluate discrepancies between the
provided evidence and the updated text, and how does this evaluation
impact the reliability of AI-generated content modifications?
- source_sentence: >-
Non-context LLM: Prompt LLM directly with <atomic-fact> True or False?
without additional context.
Retrieval→LLM: Prompt with $k$ related passages retrieved from the
knowledge source as context.
Nonparametric probability (NP): Compute the average likelihood of tokens
in the atomic fact by a masked LM and use that to make a prediction.
Retrieval→LLM + NP: Ensemble of two methods.
Some interesting observations on model hallucination behavior:
Error rates are higher for rarer entities in the task of biography
generation.
Error rates are higher for facts mentioned later in the generation.
Using retrieval to ground the model generation significantly helps reduce
hallucination.
sentences:
- >-
In what ways can the acknowledgment of uncertainty by large language
models (LLMs) contribute to the mitigation of hallucinations and enhance
the overall factual accuracy of generated content?
- >-
In what ways does the process of retrieving related passages contribute
to minimizing hallucinations in the outputs generated by language
models, and how does this approach differ from the application of
nonparametric probability methods?
- >-
How does the triplet structure $(c, y, y^*)$ play a crucial role in the
categorization of errors, and in what ways does it enhance the training
process of the editor model?
- source_sentence: >-
Fine-tuning New Knowledge
Fine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a
common technique for improving certain capabilities of the model like
instruction following. Introducing new knowledge at the fine-tuning stage
is hard to avoid.
Fine-tuning usually consumes much less compute, making it debatable
whether the model can reliably learn new knowledge via small-scale
fine-tuning. Gekhman et al. 2024 studied the research question of whether
fine-tuning LLMs on new knowledge encourages hallucinations. They found
that (1) LLMs learn fine-tuning examples with new knowledge slower than
other examples with knowledge consistent with the pre-existing knowledge
of the model; (2) Once the examples with new knowledge are eventually
learned, they increase the model’s tendency to hallucinate.
sentences:
- >-
How do the intentionally designed questions in TruthfulQA highlight
prevalent misunderstandings regarding AI responses in the healthcare
domain?
- >-
What effect does the slower acquisition of new knowledge compared to
established knowledge have on the effectiveness of large language models
in practical scenarios?
- >-
How do the RARR methodology and the FAVA model compare in their
approaches to mitigating hallucination errors in generated outputs, and
what key distinctions can be identified between the two?
- source_sentence: >-
Revision stage: Edit the output to correct content unsupported by evidence
while preserving the original content as much as possible. Initialize the
revised text $y=x$.
(1) Per $(q_i, e_{ij})$, an agreement model (via few-shot prompting + CoT,
$(y, q, e) \to \{0,1\}$) checks whether the evidence $e_{ij}$ disagrees with
the current revised text $y$.
(2) Only if a disagreement is detected, the edit model (via few-shot
prompting + CoT, $(y, q, e) \to \text{ new }y$) outputs a new version of
$y$ that aims to agree with evidence $e_{ij}$ while otherwise minimally
altering $y$.
(3) Finally, only a limited number $M=5$ of evidence snippets go into the
attribution report $A$.
Fig. 12. Illustration of RARR (Retrofit Attribution using Research and
Revision). (Image source: Gao et al. 2022)
When evaluating the revised text $y$, both attribution and preservation
metrics matter.
sentences:
- >-
What mechanisms does the editing algorithm employ to maintain fidelity
to the source material while simultaneously ensuring alignment with the
supporting evidence?
- >-
What is the impact of constraining the dataset to a maximum of $M=5$
instances on the accuracy and reliability of the attribution report $A$
when analyzing AI-generated content?
- >-
In what ways does the implementation of a query generation model enhance
the research phase when it comes to validating the accuracy of
information?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.8802083333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96875
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9895833333333334
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8802083333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3229166666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19791666666666666
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8802083333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.96875
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9895833333333334
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9477255159324969
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9301711309523809
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.930171130952381
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96875
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9947916666666666
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3229166666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19895833333333335
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.96875
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9947916666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9459628876705072
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9277405753968253
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9277405753968253
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.8802083333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96875
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9947916666666666
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8802083333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3229166666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19895833333333335
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8802083333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.96875
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9947916666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9458393511377685
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9277405753968254
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9277405753968253
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.8697916666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.984375
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9895833333333334
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9947916666666666
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8697916666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.328125
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19791666666666666
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09947916666666667
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8697916666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.984375
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9895833333333334
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9947916666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9440191417149189
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9265252976190478
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.92687251984127
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.8541666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.984375
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9947916666666666
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9947916666666666
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8541666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.328125
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19895833333333335
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09947916666666667
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8541666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.984375
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9947916666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9947916666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9380774892768095
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9184027777777778
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9186111111111112
name: Cosine Map@100
---

BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
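The three modules above correspond to a BERT encoder, CLS-token pooling, and L2 normalization. As a rough sketch of what that pipeline does (for illustration only; the supported path is model.encode in the Usage section below), the same embedding can be reproduced with the transformers library directly:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Load the underlying BertModel and its tokenizer (do_lower_case is handled
# by the tokenizer config)
tokenizer = AutoTokenizer.from_pretrained("joshuapb/fine-tuned-matryoshka-500")
encoder = AutoModel.from_pretrained("joshuapb/fine-tuned-matryoshka-500")

batch = tokenizer(
    ["Using retrieval to ground generation helps reduce hallucination."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
cls_embedding = token_embeddings[:, 0]                     # pooling_mode_cls_token
embedding = F.normalize(cls_embedding, p=2, dim=1)         # the Normalize() module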
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer

# Download the model from the 🤗 Hub
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-500")

# Encode a passage and two candidate queries
sentences = [
    'Revision stage: Edit the output to correct content unsupported by evidence while preserving the original content as much as possible. Initialize the revised text $y=x$.\n\n(1) Per $(q_i, e_{ij})$, an agreement model (via few-shot prompting + CoT, $(y, q, e) \\to \\{0,1\\}$) checks whether the evidence $e_{ij}$ disagrees with the current revised text $y$.\n(2) Only if a disagreement is detected, the edit model (via few-shot prompting + CoT, $(y, q, e) \\to \\text{ new }y$) outputs a new version of $y$ that aims to agree with evidence $e_{ij}$ while otherwise minimally altering $y$.\n(3) Finally, only a limited number $M=5$ of evidence snippets go into the attribution report $A$.\n\n\n\n\nFig. 12. Illustration of RARR (Retrofit Attribution using Research and Revision). (Image source: Gao et al. 2022)\nWhen evaluating the revised text $y$, both attribution and preservation metrics matter.',
    'What mechanisms does the editing algorithm employ to maintain fidelity to the source material while simultaneously ensuring alignment with the supporting evidence?',
    'What is the impact of constraining the dataset to a maximum of $M=5$ instances on the accuracy and reliability of the attribution report $A$ when analyzing AI-generated content?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Pairwise cosine similarity between all three embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # torch.Size([3, 3])
Evaluation
Metrics
Information Retrieval
- Dataset: dim_768

| Metric | Value |
|:-------|------:|
| cosine_accuracy@1 | 0.8802 |
| cosine_accuracy@3 | 0.9688 |
| cosine_accuracy@5 | 0.9896 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8802 |
| cosine_precision@3 | 0.3229 |
| cosine_precision@5 | 0.1979 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8802 |
| cosine_recall@3 | 0.9688 |
| cosine_recall@5 | 0.9896 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9477 |
| cosine_mrr@10 | 0.9302 |
| cosine_map@100 | 0.9302 |
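Note that recall@10 is 1.0 while precision@10 is 0.1, which indicates each evaluation query has exactly one relevant passage. Tables like these are the kind of output produced by the sentence-transformers InformationRetrievalEvaluator, run once per Matryoshka dimension. A minimal sketch follows; the query, corpus, and relevance data are hypothetical placeholders:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Evaluate at a truncated dimension (256 here)
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-500", truncate_dim=256)

queries = {"q1": "How does retrieval grounding affect hallucination rates?"}
corpus = {
    "d1": "Using retrieval to ground the model generation significantly helps reduce hallucination.",
    "d2": "Fine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a common technique.",
}
relevant_docs = {"q1": {"d1"}}  # one relevant passage per query, as in the tables

evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="dim_256",
)
results = evaluator(model)  # dict with keys like "dim_256_cosine_ndcg@10"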
Information Retrieval
- Dataset: dim_512

| Metric | Value |
|:-------|------:|
| cosine_accuracy@1 | 0.875 |
| cosine_accuracy@3 | 0.9688 |
| cosine_accuracy@5 | 0.9948 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.875 |
| cosine_precision@3 | 0.3229 |
| cosine_precision@5 | 0.199 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.875 |
| cosine_recall@3 | 0.9688 |
| cosine_recall@5 | 0.9948 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.946 |
| cosine_mrr@10 | 0.9277 |
| cosine_map@100 | 0.9277 |
Information Retrieval
- Dataset: dim_256

| Metric | Value |
|:-------|------:|
| cosine_accuracy@1 | 0.8802 |
| cosine_accuracy@3 | 0.9688 |
| cosine_accuracy@5 | 0.9948 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8802 |
| cosine_precision@3 | 0.3229 |
| cosine_precision@5 | 0.199 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8802 |
| cosine_recall@3 | 0.9688 |
| cosine_recall@5 | 0.9948 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9458 |
| cosine_mrr@10 | 0.9277 |
| cosine_map@100 | 0.9277 |
Information Retrieval
- Dataset: dim_128

| Metric | Value |
|:-------|------:|
| cosine_accuracy@1 | 0.8698 |
| cosine_accuracy@3 | 0.9844 |
| cosine_accuracy@5 | 0.9896 |
| cosine_accuracy@10 | 0.9948 |
| cosine_precision@1 | 0.8698 |
| cosine_precision@3 | 0.3281 |
| cosine_precision@5 | 0.1979 |
| cosine_precision@10 | 0.0995 |
| cosine_recall@1 | 0.8698 |
| cosine_recall@3 | 0.9844 |
| cosine_recall@5 | 0.9896 |
| cosine_recall@10 | 0.9948 |
| cosine_ndcg@10 | 0.944 |
| cosine_mrr@10 | 0.9265 |
| cosine_map@100 | 0.9269 |
Information Retrieval
- Dataset: dim_64

| Metric | Value |
|:-------|------:|
| cosine_accuracy@1 | 0.8542 |
| cosine_accuracy@3 | 0.9844 |
| cosine_accuracy@5 | 0.9948 |
| cosine_accuracy@10 | 0.9948 |
| cosine_precision@1 | 0.8542 |
| cosine_precision@3 | 0.3281 |
| cosine_precision@5 | 0.199 |
| cosine_precision@10 | 0.0995 |
| cosine_recall@1 | 0.8542 |
| cosine_recall@3 | 0.9844 |
| cosine_recall@5 | 0.9948 |
| cosine_recall@10 | 0.9948 |
| cosine_ndcg@10 | 0.9381 |
| cosine_mrr@10 | 0.9184 |
| cosine_map@100 | 0.9186 |
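Comparing the five tables, retrieval quality degrades only gently as embeddings are truncated (cosine NDCG@10 drops from 0.9477 at 768 dimensions to 0.9381 at 64), which is the point of Matryoshka training: shorter vectors cut storage and search cost at a small accuracy penalty. A minimal sketch of loading the model at a smaller dimension:

from sentence_transformers import SentenceTransformer

# Keep only the first 128 Matryoshka dimensions of each embedding
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-500", truncate_dim=128)
embeddings = model.encode(["Error rates are higher for facts mentioned later in the generation."])
print(embeddings.shape)  # (1, 128)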
Training Details
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 5
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- load_best_model_at_end: True
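For reference, here is a minimal sketch (not the exact training script) of how these hyperparameters combine with the two losses named in the model tags, with MatryoshkaLoss wrapping MultipleNegativesRankingLoss across the five embedding dimensions. The two-row dataset is a hypothetical placeholder for the 500-example training set, the eval split reuses it for simplicity, and save_strategy="epoch" is added so load_best_model_at_end can work:

from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Hypothetical (question, passage) pairs; column order is anchor, then positive
train_dataset = Dataset.from_dict({
    "anchor": ["What is the impact of infrequent entities on biography generation?",
               "How does retrieval grounding affect hallucination?"],
    "positive": ["Error rates are higher for rarer entities in biography generation.",
                 "Using retrieval to ground generation helps reduce hallucination."],
})

# In-batch-negatives ranking loss, applied at every Matryoshka dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-matryoshka-500",
    num_train_epochs=5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder eval split for the sketch
    loss=loss,
)
trainer.train()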
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-----:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.0794 | 5 | 5.4149 | - | - | - | - | - |
| 0.1587 | 10 | 4.8587 | - | - | - | - | - |
| 0.2381 | 15 | 3.9711 | - | - | - | - | - |
| 0.3175 | 20 | 3.4853 | - | - | - | - | - |
| 0.3968 | 25 | 3.6227 | - | - | - | - | - |
| 0.4762 | 30 | 3.3359 | - | - | - | - | - |
| 0.5556 | 35 | 2.0868 | - | - | - | - | - |
| 0.6349 | 40 | 2.256 | - | - | - | - | - |
| 0.7143 | 45 | 2.2958 | - | - | - | - | - |
| 0.7937 | 50 | 1.7128 | - | - | - | - | - |
| 0.8730 | 55 | 2.029 | - | - | - | - | - |
| 0.9524 | 60 | 1.9104 | - | - | - | - | - |
| 1.0 | 63 | - | 0.8950 | 0.9042 | 0.9039 | 0.8640 | 0.8989 |
| 1.0317 | 65 | 2.5929 | - | - | - | - | - |
| 1.1111 | 70 | 1.4257 | - | - | - | - | - |
| 1.1905 | 75 | 1.9956 | - | - | - | - | - |
| 1.2698 | 80 | 1.5845 | - | - | - | - | - |
| 1.3492 | 85 | 1.7383 | - | - | - | - | - |
| 1.4286 | 90 | 1.4657 | - | - | - | - | - |
| 1.5079 | 95 | 1.8461 | - | - | - | - | - |
| 1.5873 | 100 | 1.8531 | - | - | - | - | - |
| 1.6667 | 105 | 1.6504 | - | - | - | - | - |
| 1.7460 | 110 | 2.7636 | - | - | - | - | - |
| 1.8254 | 115 | 0.7195 | - | - | - | - | - |
| 1.9048 | 120 | 1.2494 | - | - | - | - | - |
| 1.9841 | 125 | 1.7331 | - | - | - | - | - |
| 2.0 | 126 | - | 0.9170 | 0.9340 | 0.9167 | 0.9013 | 0.9179 |
| 2.0635 | 130 | 1.1102 | - | - | - | - | - |
| 2.1429 | 135 | 1.8586 | - | - | - | - | - |
| 2.2222 | 140 | 1.4211 | - | - | - | - | - |
| 2.3016 | 145 | 1.9531 | - | - | - | - | - |
| 2.3810 | 150 | 1.9516 | - | - | - | - | - |
| 2.4603 | 155 | 2.1174 | - | - | - | - | - |
| 2.5397 | 160 | 1.7883 | - | - | - | - | - |
| 2.6190 | 165 | 1.4537 | - | - | - | - | - |
| 2.6984 | 170 | 1.3927 | - | - | - | - | - |
| 2.7778 | 175 | 1.2559 | - | - | - | - | - |
| 2.8571 | 180 | 1.8748 | - | - | - | - | - |
| 2.9365 | 185 | 0.7509 | - | - | - | - | - |
| 3.0 | 189 | - | 0.9312 | 0.9244 | 0.9241 | 0.9199 | 0.9349 |
| 3.0159 | 190 | 0.947 | - | - | - | - | - |
| 3.0952 | 195 | 1.9463 | - | - | - | - | - |
| 3.1746 | 200 | 1.2077 | - | - | - | - | - |
| 3.2540 | 205 | 0.7721 | - | - | - | - | - |
| 3.3333 | 210 | 1.5633 | - | - | - | - | - |
| 3.4127 | 215 | 1.5042 | - | - | - | - | - |
| 3.4921 | 220 | 1.1531 | - | - | - | - | - |
| 3.5714 | 225 | 1.2408 | - | - | - | - | - |
| 3.6508 | 230 | 0.8085 | - | - | - | - | - |
| 3.7302 | 235 | 1.1195 | - | - | - | - | - |
| 3.8095 | 240 | 1.1843 | - | - | - | - | - |
| 3.8889 | 245 | 0.7176 | - | - | - | - | - |
| 3.9683 | 250 | 1.1715 | - | - | - | - | - |
| 4.0 | 252 | - | 0.9244 | 0.9287 | 0.9251 | 0.9199 | 0.9300 |
| 4.0476 | 255 | 1.3187 | - | - | - | - | - |
| 4.1270 | 260 | 0.2891 | - | - | - | - | - |
| 4.2063 | 265 | 1.5887 | - | - | - | - | - |
| 4.2857 | 270 | 1.1227 | - | - | - | - | - |
| 4.3651 | 275 | 1.5385 | - | - | - | - | - |
| 4.4444 | 280 | 0.4732 | - | - | - | - | - |
| 4.5238 | 285 | 1.2039 | - | - | - | - | - |
| 4.6032 | 290 | 1.0755 | - | - | - | - | - |
| 4.6825 | 295 | 1.5345 | - | - | - | - | - |
| 4.7619 | 300 | 1.4255 | - | - | - | - | - |
| 4.8413 | 305 | 1.7436 | - | - | - | - | - |
| 4.9206 | 310 | 0.9408 | - | - | - | - | - |
| **5.0** | **315** | **0.7724** | **0.9269** | **0.9277** | **0.9277** | **0.9186** | **0.9302** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}