
SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L3-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-MiniLM-L3-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-MiniLM-L3-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Number of Parameters: ~17.4M (F32)

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
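
The Pooling module above performs mean pooling over token embeddings. As a minimal sketch (not part of this repository's published usage, and assuming the standard Sentence Transformers repository layout with the transformer weights at the repo root), the same embedding pipeline can be reproduced with the plain transformers library; the mean_pooling helper below is illustrative:

import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying BERT encoder and tokenizer directly from the Hub.
tokenizer = AutoTokenizer.from_pretrained("justArmenian/legal_paraphrase")
model = AutoModel.from_pretrained("justArmenian/legal_paraphrase")

def mean_pooling(token_embeddings, attention_mask):
    # Average token embeddings while ignoring padding positions
    # (matches pooling_mode_mean_tokens=True in the Pooling module above).
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

sentences = ["The Veteran has been diagnosed with both major depressive disorder and PTSD."]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
embeddings = mean_pooling(output.last_hidden_state, encoded["attention_mask"])
print(embeddings.shape)  # torch.Size([1, 384])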

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("justArmenian/legal_paraphrase")
# Run inference
sentences = [
    'In contrast, the scope of punishable conduct under the instant statute is limited by the individual\'s specified intent to "haras[s]" by communicating a "threat" so as to "engage in a knowing and willful course of conduct" directed at the victim that "alarms, torments, or terrorizes" the victim.',
    "The scope of punishable conduct under the statute is limited to the individual's intent to harass by communicating a threat so as to engage in a knowing and willful course of conduct directed at the victim that alarms, torments, or terrorizes the victim.",
    'The Veteran has been diagnosed with both major depressive disorder and PTSD.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
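
Beyond plain encoding, the stated uses include paraphrase mining and semantic search. A minimal sketch using the library's util.paraphrase_mining helper; the sentences below are illustrative placeholders, not data from this model's training set:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("justArmenian/legal_paraphrase")
sentences = [
    "The Veteran has been diagnosed with both major depressive disorder and PTSD.",
    "The Veteran carries diagnoses of PTSD and major depressive disorder.",
    "The court has granted a petition for a writ of certiorari.",
]
# Returns [score, i, j] triples sorted by decreasing cosine similarity.
pairs = util.paraphrase_mining(model, sentences)
for score, i, j in pairs:
    print(f"{score:.4f}\t{sentences[i]}\t{sentences[j]}")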

Evaluation

Metrics

Triplet

Metric               Value
cosine_accuracy      1.0
dot_accuracy         0.0
manhattan_accuracy   1.0
euclidean_accuracy   1.0
max_accuracy         1.0

Triplet

Metric               Value
cosine_accuracy      1.0
dot_accuracy         0.0
manhattan_accuracy   1.0
euclidean_accuracy   1.0
max_accuracy         1.0

Triplet

Metric               Value
cosine_accuracy      1.0
dot_accuracy         0.0
manhattan_accuracy   1.0
euclidean_accuracy   1.0
max_accuracy         1.0

Training Details

Training Dataset

Unnamed Dataset

  • Size: 2,000 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
                anchor              positive            negative
    type        string              string              string
    details     min: 8 tokens       min: 8 tokens       min: 8 tokens
                mean: 36.01 tokens  mean: 31.41 tokens  mean: 31.39 tokens
                max: 128 tokens     max: 99 tokens      max: 99 tokens
  • Samples (anchor / positive / negative):
    Sample 1
      anchor:   The weight of the competent and probative medical opinions of record is against finding that the Veteran has a current diagnosis of PTSD or any other chronic acquired psychiatric disorder which is related to her military service.
      positive: The weight of the credible and persuasive medical evidence on record suggests that the Veteran does not currently suffer from PTSD or any other chronic psychiatric condition related to her military service.
      negative: It is evident that an unauthorized physical intrusion would have been deemed a "search" under the Fourth Amendment when it was originally formulated.
    Sample 2
      anchor:   We have no doubt that such a physical intrusion would have been considered a “search” within the meaning of the Fourth Amendment when it was adopted.
      positive: It is evident that an unauthorized physical intrusion would have been deemed a "search" under the Fourth Amendment when it was originally formulated.
      negative: In June 1972, the Veteran's condition was assessed by the Army Medical Board, which concluded that the Veteran's back condition made him unfit for active service, leading to his discharge from the military.
    Sample 3
      anchor:   Later in June 1972, the Veteran's condition was evaluated by the Army Medical Board, where it was determined that the Veteran's back condition rendered him physically unfit for active service, and he was subsequently discharged from service.
      positive: In June 1972, the Veteran's condition was assessed by the Army Medical Board, which concluded that the Veteran's back condition made him unfit for active service, leading to his discharge from the military.
      negative: The court has granted a petition for a writ of certiorari to review a decision made by the Court of Appeal of California, Fourth Appellate District, Division One.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
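
    As a minimal sketch, this loss would be constructed roughly as follows; scale=20.0 and cosine similarity match the parameters listed above:

    from sentence_transformers import SentenceTransformer, losses, util

    model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L3-v2")
    # In-batch negatives: for each anchor, the paired positive must outrank every
    # other sample in the batch under scaled cosine similarity.
    loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=util.cos_sim)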
    

Evaluation Dataset

Unnamed Dataset

  • Size: 500 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
                anchor              positive            negative
    type        string              string              string
    details     min: 8 tokens       min: 8 tokens       min: 8 tokens
                mean: 35.69 tokens  mean: 32.11 tokens  mean: 32.12 tokens
                max: 128 tokens     max: 77 tokens      max: 77 tokens
  • Samples (anchor / positive / negative):
    Sample 1
      anchor:   (Virginia v. Black, supra, 538 U.S. at p. 347.)
      positive: The Black Court asserted that the "vagueness doctrine is a safeguard against the arbitrary exercise of power by government officials."
      negative: This Court will determine if there was enough evidence to support the jury's verdict by considering whether reasonable people could have reached different conclusions based on the evidence presented.
    Sample 2
      anchor:   However, this Court will determine that there was sufficient evidence to sustain the jury's verdict if the evidence was "of such quality and weight that, having in mind the beyond a reasonable doubt burden of proof standard, reasonable fair-minded men in the exercise of impartial judgment might reach different conclusions on every element of the offense."
      positive: This Court will determine if there was enough evidence to support the jury's verdict by considering whether reasonable people could have reached different conclusions based on the evidence presented.
      negative: The VA psychiatrist believed that the Veteran was likely to have PTSD as a direct result of the attack on him during his military service in Korea.
    Sample 3
      anchor:   This VA psychiatrist opined that the Veteran had PTSD more likely than not to be the direct result of the attack on him during service in Korea.
      positive: The VA psychiatrist believed that the Veteran was likely to have PTSD as a direct result of the attack on him during his military service in Korea.
      negative: She noted that the Veteran's greatest source of stress was caring for their adult child without any assistance.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
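
A minimal training sketch that reproduces the non-default hyperparameters above with the Sentence Transformers v3 trainer API; the datasets below are placeholders, since the actual training triplets are not included in this card:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L3-v2")
loss = losses.MultipleNegativesRankingLoss(model)

# Placeholder datasets with the anchor / positive / negative columns described above.
train_dataset = Dataset.from_dict({"anchor": ["..."], "positive": ["..."], "negative": ["..."]})
eval_dataset = Dataset.from_dict({"anchor": ["..."], "positive": ["..."], "negative": ["..."]})

args = SentenceTransformerTrainingArguments(
    output_dir="legal_paraphrase",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts acting as false negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()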

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Training Loss   Validation Loss   all-nli-dev_max_accuracy   all-nli-test_max_accuracy
0       0      -               -                 1.0                        -
0.08    10     0.1402          0.0759            1.0                        -
0.16    20     0.0873          0.0726            1.0                        -
0.24    30     0.0992          0.0677            1.0                        -
0.32    40     0.0759          0.0651            1.0                        -
0.4     50     0.0355          0.0652            1.0                        -
0.48    60     0.0814          0.0666            1.0                        -
0.56    70     0.0353          0.0677            1.0                        -
0.64    80     0.1404          0.0677            1.0                        -
0.72    90     0.0336          0.0664            1.0                        -
0.8     100    0.0559          0.0661            1.0                        -
0.88    110    0.0484          0.0654            1.0                        -
0.96    120    0.0522          0.0650            1.0                        -
1.0     125    -               -                 -                          1.0

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}