---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:38688
- loss:ContrastiveLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
widget:
- source_sentence: There is a heavy cost for this service provided in conjunction with NOAA and SARSAT.
  sentences:
  - No significant changes have been made to the roadway except for its legal definition.
  - Some academics have questioned the ethics of these payments.
  - There is no charge for this service provided in conjunction with NOAA and SARSAT.
- source_sentence: You're not thin.
  sentences:
  - This process is called low-dimensional embedded in machine learning.
  - You're thin.
  - Jean Prouvost was the founder of Marie Claire.
- source_sentence: The lead man is charisma-free.
  sentences:
  - Fossil eggs are rare, but one oogenus, Polyclonoolithus, was discovered in the Hekou Group.
  - The roof is shingled, and topped by a small belfry.
  - The lead man doesn't have charisma.
- source_sentence: Willis has criticized the rules adopted by the RNC, particularly Rules 12, 16, and 40.
  sentences:
  - Willis has fully accepted the rules adopted by the RNC, particularly Rules 12, 16, and 40.
  - I can't stop reading.
  - This force acts on water independently of the wind stress.
- source_sentence: The publication was named after Sir James Joynton Smith.
  sentences:
  - Detailed specific information on the ongoing validation activities is being made available in related publications.
  - On November 25, 2012, Tom O'Brien was reinstated.
  - The publication took its name from its founder and chief financer Sir James Joynton Smith.
pipeline_tag: sentence-similarity
---
SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
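Concretely, the three modules tokenize and encode with the BERT backbone, mean-pool the token embeddings, and L2-normalize the result. A minimal sketch of the same computation using transformers directly (illustrative, not the library's internal code):
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "LeoChiuu/all-MiniLM-L6-v2-negations"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

encoded = tokenizer(["An example sentence"], padding=True, truncation=True,
                    max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Pooling: mean over non-padding tokens only
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize: unit-length vectors, so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)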
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")
# Run inference
sentences = [
'The publication was named after Sir James Joynton Smith.',
'The publication took its name from its founder and chief financer Sir James Joynton Smith.',
"On November 25, 2012, Tom O'Brien was reinstated.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
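Beyond pairwise similarity, the embeddings can also back semantic search. A minimal sketch using util.semantic_search, where the corpus and query are illustrative:
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")

# Illustrative corpus and query
corpus = [
    "There is no charge for this service provided in conjunction with NOAA and SARSAT.",
    "The roof is shingled, and topped by a small belfry.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("Is this service free?", convert_to_tensor=True)

# Top hit per query, returned as a list of {"corpus_id", "score"} dicts
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(corpus[hits[0][0]["corpus_id"]], hits[0][0]["score"])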
Training Details
Training Dataset
Unnamed Dataset
- Size: 38,688 training samples
- Columns: sentence_0, sentence_1, and label
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | int |
| details | min: 5 tokens, mean: 15.94 tokens, max: 41 tokens | min: 5 tokens, mean: 15.96 tokens, max: 44 tokens | 0: ~48.50%, 1: ~51.50% |
- Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| No, that is impossible. | No, that is not possible. | 0 |
| The building did indeed serve as a hof, according to the bone finds. | The bone finds thus indicate the building did indeed serve as a hof. | 0 |
| The building became a pet shop. | The building became a hospital. | 1 |
- Loss: ContrastiveLoss with these parameters:

{
    "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
    "margin": 0.5,
    "size_average": true
}
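As a rough sketch (not the exact training script), this setup could be reproduced with the Sentence Transformers v3 trainer API; the two-row dataset below is an illustrative stand-in for the real 38,688 pairs:
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import ContrastiveLoss, SiameseDistanceMetric

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative stand-in for the actual 38,688-pair dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["No, that is impossible.", "The building became a pet shop."],
    "sentence_1": ["No, that is not possible.", "The building became a hospital."],
    "label": [0, 1],
})

# Same loss configuration as reported above
loss = ContrastiveLoss(
    model,
    distance_metric=SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()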
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
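For reference, a hedged sketch of how these values map onto SentenceTransformerTrainingArguments in the v3 API (the output_dir is illustrative):
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-negations",  # illustrative path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
These arguments would then be passed to the trainer via SentenceTransformerTrainer(..., args=args).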
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.2068 | 500 | 0.0353 |
0.4136 | 1000 | 0.0307 |
0.6203 | 1500 | 0.0234 |
0.8271 | 2000 | 0.0187 |
1.0339 | 2500 | 0.0152 |
1.2407 | 3000 | 0.0134 |
1.4475 | 3500 | 0.0123 |
1.6543 | 4000 | 0.0111 |
1.8610 | 4500 | 0.0107 |
2.0678 | 5000 | 0.0097 |
2.2746 | 5500 | 0.0096 |
2.4814 | 6000 | 0.0091 |
2.6882 | 6500 | 0.0087 |
2.8950 | 7000 | 0.0086 |
3.1017 | 7500 | 0.0075 |
3.3085 | 8000 | 0.008 |
3.5153 | 8500 | 0.0074 |
3.7221 | 9000 | 0.007 |
3.9289 | 9500 | 0.007 |
4.1356 | 10000 | 0.0063 |
4.3424 | 10500 | 0.0068 |
4.5492 | 11000 | 0.0061 |
4.7560 | 11500 | 0.0059 |
4.9628 | 12000 | 0.0056 |
5.1696 | 12500 | 0.0052 |
5.3763 | 13000 | 0.0055 |
5.5831 | 13500 | 0.0051 |
5.7899 | 14000 | 0.005 |
5.9967 | 14500 | 0.0047 |
6.2035 | 15000 | 0.0046 |
6.4103 | 15500 | 0.0047 |
6.6170 | 16000 | 0.0044 |
6.8238 | 16500 | 0.0044 |
7.0306 | 17000 | 0.0041 |
7.2374 | 17500 | 0.004 |
7.4442 | 18000 | 0.0044 |
7.6510 | 18500 | 0.0039 |
7.8577 | 19000 | 0.0038 |
8.0645 | 19500 | 0.0038 |
8.2713 | 20000 | 0.0037 |
8.4781 | 20500 | 0.0039 |
8.6849 | 21000 | 0.0037 |
8.8916 | 21500 | 0.0036 |
9.0984 | 22000 | 0.0034 |
9.3052 | 22500 | 0.0036 |
9.5120 | 23000 | 0.0035 |
9.7188 | 23500 | 0.0034 |
9.9256 | 24000 | 0.0035 |
Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.0.1
- Transformers: 4.40.2
- PyTorch: 2.3.0+cpu
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
ContrastiveLoss
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}