
SentenceTransformer based on dunzhang/stella_en_1.5B_v5

This is a sentence-transformers model finetuned from dunzhang/stella_en_1.5B_v5. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: dunzhang/stella_en_1.5B_v5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model 
  (1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 1536, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
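
Note that the transformer backbone produces 1536-dimensional token embeddings, which are mean-pooled and then projected down to the 1024-dimensional output space by the final Dense module. A minimal sketch to verify the dimensions (assuming the hub id ebinan92/dunzhang_stella_en_1.5B_v5 for this fine-tune):

from sentence_transformers import SentenceTransformer

# Assumed hub id for this fine-tune; replace with your own path or repo id if different.
model = SentenceTransformer("ebinan92/dunzhang_stella_en_1.5B_v5")

print(model[1].get_sentence_embedding_dimension())  # 1536: mean-pooled Qwen2 hidden size
print(model[2].get_sentence_embedding_dimension())  # 1024: after the Dense projection
print(model.get_sentence_embedding_dimension())     # 1024: final output dimensionality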

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ebinan92/dunzhang_stella_en_1.5B_v5")
# Run inference
sentences = [
    'QuestionSummary: Function Machines\nQuestion: Which of the following pairs of function machines are correct?\nCorrectAnswer: \\(a \\Rightarrow \\times2 \\Rightarrow -5 \\Rightarrow 2a-5\\)\n\n\\(a \\Rightarrow -5 \\Rightarrow \\times2 \\Rightarrow 2(a-5)\\)\nAnswer: \\(a \\Rightarrow \\times2 \\Rightarrow -5 \\Rightarrow 2a-5\\)\n\n\\(a \\Rightarrow \\times2 \\Rightarrow -5 \\Rightarrow 2(a-5)\\)',
    'Does not follow the arrows through a function machine, changes the order of the operations asked.',
    'Incorrectly cancels what they believe is a factor in algebraic fractions',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
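
Because questions (anchors) and misconception descriptions (positives) are embedded into the same space, a common use is ranking candidate misconceptions against a question by cosine similarity. A minimal sketch, reusing texts from the examples on this card and the assumed hub id:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ebinan92/dunzhang_stella_en_1.5B_v5")  # assumed hub id

# One question/answer text (the anchor) and a few candidate misconception descriptions.
question = "QuestionSummary: Function Machines\nQuestion: Which of the following pairs of function machines are correct? ..."
candidates = [
    "Does not follow the arrows through a function machine, changes the order of the operations asked.",
    "Incorrectly cancels what they believe is a factor in algebraic fractions",
    "Does not know the properties of a rectangle",
]

query_embedding = model.encode([question])
candidate_embeddings = model.encode(candidates)

# Cosine similarity between the question and every candidate, printed highest first.
scores = model.similarity(query_embedding, candidate_embeddings)[0]
for score, text in sorted(zip(scores.tolist(), candidates), key=lambda pair: pair[0], reverse=True):
    print(f"{score:.3f}  {text}")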

Evaluation

Metrics

Information Retrieval

| Metric                 |  Value |
|:-----------------------|-------:|
| cosine_accuracy@25     | 0.6946 |
| cosine_precision@100   | 0.0088 |
| cosine_precision@200   | 0.0047 |
| cosine_precision@300   | 0.0032 |
| cosine_precision@400   | 0.0024 |
| cosine_precision@500   | 0.002  |
| cosine_precision@600   | 0.0016 |
| cosine_precision@700   | 0.0014 |
| cosine_precision@800   | 0.0012 |
| cosine_precision@900   | 0.0011 |
| cosine_precision@1000  | 0.001  |
| cosine_recall@100      | 0.876  |
| cosine_recall@200      | 0.9369 |
| cosine_recall@300      | 0.9587 |
| cosine_recall@400      | 0.9736 |
| cosine_recall@500      | 0.9805 |
| cosine_recall@600      | 0.9862 |
| cosine_recall@700      | 0.9931 |
| cosine_recall@800      | 0.9954 |
| cosine_recall@900      | 0.9977 |
| cosine_recall@1000     | 0.9977 |
| cosine_ndcg@25         | 0.3564 |
| cosine_mrr@25          | 0.261  |
| cosine_map@25          | 0.261  |
| dot_accuracy@25        | 0.4271 |
| dot_precision@100      | 0.0076 |
| dot_precision@200      | 0.0043 |
| dot_precision@300      | 0.0031 |
| dot_precision@400      | 0.0024 |
| dot_precision@500      | 0.0019 |
| dot_precision@600      | 0.0016 |
| dot_precision@700      | 0.0014 |
| dot_precision@800      | 0.0012 |
| dot_precision@900      | 0.0011 |
| dot_precision@1000     | 0.001  |
| dot_recall@100         | 0.76   |
| dot_recall@200         | 0.8657 |
| dot_recall@300         | 0.9231 |
| dot_recall@400         | 0.9437 |
| dot_recall@500         | 0.961  |
| dot_recall@600         | 0.9713 |
| dot_recall@700         | 0.9793 |
| dot_recall@800         | 0.9839 |
| dot_recall@900         | 0.9874 |
| dot_recall@1000        | 0.9897 |
| dot_ndcg@25            | 0.1953 |
| dot_mrr@25             | 0.1329 |
| dot_map@25             | 0.1329 |
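
These metrics follow the output format of sentence-transformers' InformationRetrievalEvaluator. A hedged sketch of how such metrics can be computed; the queries, corpus, and relevance judgments below are placeholders, since the actual evaluation split is not distributed with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ebinan92/dunzhang_stella_en_1.5B_v5")  # assumed hub id

# Placeholder evaluation data: question texts as queries, misconception texts as corpus.
queries = {"q1": "QuestionSummary: ...\nQuestion: ...\nCorrectAnswer: ...\nAnswer: ..."}
corpus = {
    "d1": "Does not follow the arrows through a function machine, changes the order of the operations asked.",
    "d2": "Incorrectly cancels what they believe is a factor in algebraic fractions",
    "d3": "Does not know the properties of a rectangle",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    accuracy_at_k=[25],
    precision_recall_at_k=[100, 200, 300, 400, 500, 600, 700, 800, 900, 1000],
    mrr_at_k=[25],
    ndcg_at_k=[25],
    map_at_k=[25],
    name="val",  # matches the "val_" prefix in the training logs below
)
print(evaluator(model))  # dict of metrics in the format shown above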

Training Details

Training Dataset

Unnamed Dataset

  • Size: 3,999 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor                                              | positive                                          |
    |:--------|:----------------------------------------------------|:--------------------------------------------------|
    | type    | string                                              | string                                            |
    | details | min: 30 tokens, mean: 87.03 tokens, max: 363 tokens | min: 4 tokens, mean: 13.84 tokens, max: 42 tokens |
  • Samples:
    • Sample 1
      anchor:
        QuestionSummary: Simplifying Algebraic Fractions
        Question: Simplify the following, if possible: ( \frac{m^{2}+2 m-3}{m-3} )
        CorrectAnswer: Does not simplify
        Answer: ( m+1 )
      positive:
        Does not know that to factorise a quadratic expression, to find two numbers that add to give the coefficient of the x term, and multiply to give the non variable term
    • Sample 2
      anchor:
        QuestionSummary: Range and Interquartile Range from a List of Data
        Question: Tom and Katie are discussing the ( 5 ) plants with these heights:
        ( 24 \mathrm{cm}, 17 \mathrm{cm}, 42 \mathrm{cm}, 26 \mathrm{cm}, 13 \mathrm{cm} )
        Tom says if all the plants were cut in half, the range wouldn't change.
        Katie says if all the plants grew by ( 3 \mathrm{cm} ) each, the range wouldn't change.
        Who do you agree with?
        CorrectAnswer: Only Katie
        Answer: Only Tom
      positive:
        Believes if you changed all values by the same proportion the range would not change
    • Sample 3
      anchor:
        QuestionSummary: Properties of Quadrilaterals
        Question: The angles highlighted on this rectangle with different length sides can never be... A rectangle with the diagonals drawn in. The angle on the right hand side at the centre is highlighted in red and the angle at the bottom at the centre is highlighted in yellow.
        CorrectAnswer: ( 90^{\circ} )
        Answer: acute
      positive:
        Does not know the properties of a rectangle
  • Loss: CachedMultipleNegativesSymmetricRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "mini_batch_size": 1
    }
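
A minimal sketch of building a dataset with this (anchor, positive) column layout and instantiating the loss above; the example rows are placeholders in the style of the samples shown:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesSymmetricRankingLoss

model = SentenceTransformer("dunzhang/stella_en_1.5B_v5")  # the base model being finetuned

# Two-column dataset: anchor = question text, positive = misconception description.
train_dataset = Dataset.from_dict({
    "anchor": ["QuestionSummary: ...\nQuestion: ...\nCorrectAnswer: ...\nAnswer: ..."],
    "positive": ["Description of the misconception behind the chosen wrong answer."],
})

# Loss configured with the parameters reported above (similarity_fct defaults to cos_sim).
loss = CachedMultipleNegativesSymmetricRankingLoss(
    model,
    scale=20.0,
    mini_batch_size=1,
)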
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 1372
  • per_device_eval_batch_size: 1372
  • learning_rate: 4e-05
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • save_only_model: True
  • bf16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
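
These non-default values map directly onto SentenceTransformerTrainingArguments. A hedged sketch of the corresponding training setup; output_dir and the placeholder datasets are assumptions, not the original configuration:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesSymmetricRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("dunzhang/stella_en_1.5B_v5")
train_dataset = Dataset.from_dict({"anchor": ["..."], "positive": ["..."]})  # see Training Dataset above
eval_dataset = Dataset.from_dict({"anchor": ["..."], "positive": ["..."]})   # hypothetical held-out split
loss = CachedMultipleNegativesSymmetricRankingLoss(model, scale=20.0, mini_batch_size=1)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=1372,
    per_device_eval_batch_size=1372,
    learning_rate=4e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    save_only_model=True,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()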

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 1372
  • per_device_eval_batch_size: 1372
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 4e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: True
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | val_cosine_map@25 |
|:------:|:----:|:-------------:|:-----------------:|
| 0.3333 | 1    | 2.2717        | 0.1775            |
| 0.6667 | 2    | 2.1785        | 0.2300            |
| 1.0    | 3    | 1.4112        | 0.2651            |
| 1.3333 | 4    | 1.1861        | 0.2726            |
| 1.6667 | 5    | 0.8742        | 0.2813            |
| 2.0    | 6    | 0.8327        | 0.2818            |
| 2.3333 | 7    | 0.7626        | 0.2777            |
| 2.6667 | 8    | 0.5767        | 0.2752            |
| 3.0    | 9    | 0.493         | 0.2698            |
| 3.3333 | 10   | 0.5174        | 0.2654            |
| 3.6667 | 11   | 0.3906        | 0.2655            |
| 4.0    | 12   | 0.419         | 0.2627            |
| 4.3333 | 13   | 0.4394        | 0.2625            |
| 4.6667 | 14   | 0.5449        | 0.2612            |
| 5.0    | 15   | 0.3731        | 0.2610            |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.1.1
  • Transformers: 4.45.1
  • PyTorch: 2.2.0
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.20.0
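
To approximate this environment, the versions above can be pinned at install time (newer compatible releases may also work):

pip install sentence-transformers==3.1.1 transformers==4.45.1 torch==2.2.0 accelerate==0.34.2 datasets==3.0.1 tokenizers==0.20.0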

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}