---
language: []
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:38688
  - loss:ContrastiveLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
widget:
  - source_sentence: >-
      There is a heavy cost for this service provided in conjunction with NOAA
      and SARSAT.
    sentences:
      - >-
        No significant changes have been made to the roadway except for its
        legal definition.
      - Some academics have questioned the ethics of these payments.
      - >-
        There is no charge for this service provided in conjunction with NOAA
        and SARSAT.
  - source_sentence: You're not thin.
    sentences:
      - This process is called low-dimensional embedded in machine learning.
      - You're thin.
      - Jean Prouvost was the founder of Marie Claire.
  - source_sentence: The lead man is charisma-free.
    sentences:
      - >-
        Fossil eggs are rare, but one oogenus, Polyclonoolithus, was discovered
        in the Hekou Group.
      - The roof is shingled, and topped by a small belfry.
      - The lead man doesn't have charisma.
  - source_sentence: >-
      Willis has criticized the rules adopted by the RNC, particularly Rules 12,
      16, and 40.
    sentences:
      - >-
        Willis has fully accepted the rules adopted by the RNC, particularly
        Rules 12, 16, and 40.
      - I can't stop reading.
      - This force acts on water independently of the wind stress.
  - source_sentence: The publication was named after Sir James Joynton Smith.
    sentences:
      - >-
        Detailed specific information on the ongoing validation activities is
        being made available in related publications.
      - On November 25, 2012, Tom O'Brien was reinstated.
      - >-
        The publication took its name from its founder and chief financer Sir
        James Joynton Smith.
pipeline_tag: sentence-similarity
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
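
For example, here is a minimal semantic-search sketch; the query is made up for illustration, and the corpus sentences are taken from the widget examples in the metadata above:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")

# Hypothetical query; the corpus sentences come from the widget examples above.
corpus = [
    "There is no charge for this service provided in conjunction with NOAA and SARSAT.",
    "There is a heavy cost for this service provided in conjunction with NOAA and SARSAT.",
    "The roof is shingled, and topped by a small belfry.",
]
query_embedding = model.encode(["Is this service free of charge?"])
corpus_embeddings = model.encode(corpus)

# Rank the corpus by cosine similarity to the query.
scores = model.similarity(query_embedding, corpus_embeddings)[0].tolist()
for score, sentence in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.4f}  {sentence}")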

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
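
Concretely, the Pooling module mean-pools the token embeddings over non-padding tokens, and the Normalize module L2-normalizes the result. A rough equivalent of this forward pass in plain transformers, shown only to illustrate the architecture (for actual use, load the model via SentenceTransformer as in the Usage section below):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LeoChiuu/all-MiniLM-L6-v2-negations")
encoder = AutoModel.from_pretrained("LeoChiuu/all-MiniLM-L6-v2-negations")

def encode(sentences: list[str]) -> torch.Tensor:
    # (0) Transformer: tokenize with max_seq_length=256 and run the BERT encoder
    batch = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq, 384)
    # (1) Pooling: mean over non-padding tokens only
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
    # (2) Normalize: unit-length vectors, so dot product equals cosine similarity
    return F.normalize(pooled, p=2, dim=1)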

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")
# Run inference
sentences = [
    'The publication was named after Sir James Joynton Smith.',
    'The publication took its name from its founder and chief financer Sir James Joynton Smith.',
    "On November 25, 2012, Tom O'Brien was reinstated.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
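
Since the training data pairs sentences with their negations (see the samples under Training Details), a natural sanity check is to compare a paraphrase pair against a negation pair. The sentences below come from this card's own examples; any expected gap between the two scores is an assumption based on the training objective, not a measured result:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")

# A paraphrase pair and a negation pair, both drawn from this card's examples.
paraphrase = model.encode(["No, that is impossible.", "No, that is not possible."])
negation = model.encode(["You're thin.", "You're not thin."])

# Cosine similarity within each pair (the off-diagonal entry of each 2x2 matrix).
print("paraphrase pair:", model.similarity(paraphrase, paraphrase)[0, 1].item())
print("negation pair:  ", model.similarity(negation, negation)[0, 1].item())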

Training Details

Training Dataset

Unnamed Dataset

  • Size: 38,688 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 5 tokens, mean: 15.94 tokens, max: 41 tokens
    • sentence_1: string; min: 5 tokens, mean: 15.96 tokens, max: 44 tokens
    • label: int; 0: ~48.50%, 1: ~51.50%
  • Samples (sentence_0 / sentence_1 / label):
    • "No, that is impossible." / "No, that is not possible." / 0
    • "The building did indeed serve as a hof, according to the bone finds." / "The bone finds thus indicate the building did indeed serve as a hof." / 0
    • "The building became a pet shop." / "The building became a hospital." / 1
  • Loss: ContrastiveLoss with these parameters:
    {
        "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
        "margin": 0.5,
        "size_average": true
    }
    
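
For reference, the library's ContrastiveLoss computes 0.5 * (label * d^2 + (1 - label) * max(0, margin - d)^2) per pair, where d is the cosine distance between the two embeddings. Below is a minimal training sketch using these loss parameters and the non-default hyperparameters listed in the next section; the two-row dataset is a made-up stand-in for the real 38,688-sample data, mirroring the column layout and labels of the samples above:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import ContrastiveLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Stand-in dataset mirroring the sentence_0 / sentence_1 / label layout above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["No, that is impossible.", "The building became a pet shop."],
    "sentence_1": ["No, that is not possible.", "The building became a hospital."],
    "label": [0, 1],
})

# margin and size_average match the loss parameters listed above;
# the distance metric defaults to cosine distance.
loss = ContrastiveLoss(model=model, margin=0.5, size_average=True)

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-negations",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()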

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.2068 500 0.0353
0.4136 1000 0.0307
0.6203 1500 0.0234
0.8271 2000 0.0187
1.0339 2500 0.0152
1.2407 3000 0.0134
1.4475 3500 0.0123
1.6543 4000 0.0111
1.8610 4500 0.0107
2.0678 5000 0.0097
2.2746 5500 0.0096
2.4814 6000 0.0091
2.6882 6500 0.0087
2.8950 7000 0.0086
3.1017 7500 0.0075
3.3085 8000 0.008
3.5153 8500 0.0074
3.7221 9000 0.007
3.9289 9500 0.007
4.1356 10000 0.0063
4.3424 10500 0.0068
4.5492 11000 0.0061
4.7560 11500 0.0059
4.9628 12000 0.0056
5.1696 12500 0.0052
5.3763 13000 0.0055
5.5831 13500 0.0051
5.7899 14000 0.005
5.9967 14500 0.0047
6.2035 15000 0.0046
6.4103 15500 0.0047
6.6170 16000 0.0044
6.8238 16500 0.0044
7.0306 17000 0.0041
7.2374 17500 0.004
7.4442 18000 0.0044
7.6510 18500 0.0039
7.8577 19000 0.0038
8.0645 19500 0.0038
8.2713 20000 0.0037
8.4781 20500 0.0039
8.6849 21000 0.0037
8.8916 21500 0.0036
9.0984 22000 0.0034
9.3052 22500 0.0036
9.5120 23000 0.0035
9.7188 23500 0.0034
9.9256 24000 0.0035

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.2
  • PyTorch: 2.3.0+cpu
  • Accelerate: 0.32.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveLoss

@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
    title={Dimensionality Reduction by Learning an Invariant Mapping},
    year={2006},
    volume={2},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}