---
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
  - dot_accuracy@1
  - dot_accuracy@3
  - dot_accuracy@5
  - dot_accuracy@10
  - dot_precision@1
  - dot_precision@3
  - dot_precision@5
  - dot_precision@10
  - dot_recall@1
  - dot_recall@3
  - dot_recall@5
  - dot_recall@10
  - dot_ndcg@10
  - dot_mrr@10
  - dot_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:600
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: What types of additional risks might future updates incorporate?
    sentences:
      - >-
        Inaccuracies in these labels can impact the “stability” or robustness of
        these benchmarks, which many GAI practitioners consider during the model
        selection process.
      - >-
        For example, when prompted to generate images of CEOs, doctors, lawyers,
        and judges, current text-to-image models underrepresent women and/or
        racial minorities, and people with disabilities.
      - >-
        Future updates may incorporate additional risks or provide further
        details on the risks identified below.
  - source_sentence: >-
      What are some potential consequences of the abuse and misuse of AI systems
      by humans?
    sentences:
      - >-
        Even when trained on “clean” data, increasingly capable GAI models can
        synthesize or produce synthetic NCII and CSAM.
      - >-
        the abuse, misuse, and unsafe repurposing by humans (adversarial or
        not), and others result from interactions between a human and an AI
        system.
      - >-
        Energy and carbon emissions vary based on what is being done with the
        GAI model (i.e., pre-training, fine-tuning, inference), the modality of
        the content, hardware used, and type of task or application.
  - source_sentence: What types of digital content can be included in GAI?
    sentences:
      - >-
        Errors in third-party GAI components can also have downstream impacts
        on accuracy and robustness.
      - >-
        In direct prompt injections, attackers might craft malicious prompts and
        input them directly to a GAI system, with a variety of downstream
        negative consequences to interconnected systems.
      - >-
        This can include images, videos, audio, text, and other digital
        content. While not all GAI is derived from foundation models, for
        purposes of this document, GAI generally refers to generative foundation
        models.
  - source_sentence: >-
      What are the implications of harmful bias and homogenization in relation
      to stereotypical content?
    sentences:
      - >-
        These risks provide a lens through which organizations can frame and
        execute risk management efforts.
      - >-
        Not every suggested action applies to every AI Actor or is
        relevant to every AI Actor Task.
      - >-
        The spread of denigrating or stereotypical content can also further
        exacerbate representational harms (see Harmful Bias and Homogenization
        below).
  - source_sentence: >-
      What are the inventory exemptions defined in organizational policies for
      GAI systems embedded into application software?
    sentences:
      - >-
        Methods for creating smaller versions of trained models, such as model
        distillation or compression, could reduce environmental impacts at
        inference time, but training and tuning such models may still contribute
        to their environmental impacts.
      - >-
        For example, predictive inferences made by GAI models based on PII or
        protected attributes can contribute to adverse decisions, leading to
        representational or allocative harms to individuals or groups (see
        Harmful Bias and Homogenization below).
      - >-
        Information Security GV-1.6-002 Define any inventory exemptions in
        organizational policies for GAI systems embedded into application
        software.
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.98
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.99
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3266666666666667
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19799999999999998
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.98
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.99
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9563669441556807
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9417619047619047
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9417619047619047
            name: Cosine Map@100
          - type: dot_accuracy@1
            value: 0.9
            name: Dot Accuracy@1
          - type: dot_accuracy@3
            value: 0.98
            name: Dot Accuracy@3
          - type: dot_accuracy@5
            value: 0.99
            name: Dot Accuracy@5
          - type: dot_accuracy@10
            value: 1
            name: Dot Accuracy@10
          - type: dot_precision@1
            value: 0.9
            name: Dot Precision@1
          - type: dot_precision@3
            value: 0.3266666666666667
            name: Dot Precision@3
          - type: dot_precision@5
            value: 0.19799999999999998
            name: Dot Precision@5
          - type: dot_precision@10
            value: 0.09999999999999998
            name: Dot Precision@10
          - type: dot_recall@1
            value: 0.9
            name: Dot Recall@1
          - type: dot_recall@3
            value: 0.98
            name: Dot Recall@3
          - type: dot_recall@5
            value: 0.99
            name: Dot Recall@5
          - type: dot_recall@10
            value: 1
            name: Dot Recall@10
          - type: dot_ndcg@10
            value: 0.9563669441556807
            name: Dot Ndcg@10
          - type: dot_mrr@10
            value: 0.9417619047619047
            name: Dot Mrr@10
          - type: dot_map@100
            value: 0.9417619047619047
            name: Dot Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

### Model Sources

  • Documentation: [Sentence Transformers Documentation](https://sbert.net)
  • Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
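
For reference, the same three-module stack can be assembled by hand with `sentence_transformers.models`; a minimal sketch (loading the released checkpoint, as shown under Usage, is the normal route):

```python
from sentence_transformers import SentenceTransformer, models

# CLS-token pooling over a 512-token BERT encoder, followed by L2
# normalization, mirroring the module listing above.
transformer = models.Transformer("Snowflake/snowflake-arctic-embed-m", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])
```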

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Galatea007/finetuned_arctic")
# Run inference
sentences = [
    'What are the inventory exemptions defined in organizational policies for GAI systems embedded into application software?',
    'Information Security GV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems embedded into application software.',
    'For example, predictive inferences made by GAI models based on PII or protected attributes can contribute to adverse decisions, leading to representational or allocative harms to individuals or groups (see Harmful Bias and Homogenization below).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
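
Since the model was tuned for retrieval, the typical pattern is ranking candidate passages against a query; a small sketch reusing the `sentences` list above:

```python
# Rank the two passages against the question; the GV-1.6-002 passage
# should receive the highest cosine similarity.
query_embedding = model.encode([sentences[0]])
passage_embeddings = model.encode(sentences[1:])
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # a (1, 2) tensor of cosine similarities
```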

## Evaluation

### Metrics

#### Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.9    |
| cosine_accuracy@3   | 0.98   |
| cosine_accuracy@5   | 0.99   |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.9    |
| cosine_precision@3  | 0.3267 |
| cosine_precision@5  | 0.198  |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.9    |
| cosine_recall@3     | 0.98   |
| cosine_recall@5     | 0.99   |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@10      | 0.9564 |
| cosine_mrr@10       | 0.9418 |
| cosine_map@100      | 0.9418 |
| dot_accuracy@1      | 0.9    |
| dot_accuracy@3      | 0.98   |
| dot_accuracy@5      | 0.99   |
| dot_accuracy@10     | 1.0    |
| dot_precision@1     | 0.9    |
| dot_precision@3     | 0.3267 |
| dot_precision@5     | 0.198  |
| dot_precision@10    | 0.1    |
| dot_recall@1        | 0.9    |
| dot_recall@3        | 0.98   |
| dot_recall@5        | 0.99   |
| dot_recall@10       | 1.0    |
| dot_ndcg@10         | 0.9564 |
| dot_mrr@10          | 0.9418 |
| dot_map@100         | 0.9418 |
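
These figures are the output of `sentence_transformers.evaluation.InformationRetrievalEvaluator`. A minimal sketch of running such an evaluation, with hypothetical toy data standing in for the held-out question/context pairs (the model id is inferred from this repository):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Galatea007/finetuned_arctic")

# Hypothetical toy data: one query and its single relevant document.
queries = {"q1": "What types of digital content can be included in GAI?"}
corpus = {"d1": "This can include images, videos, audio, text, and other digital content."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
print(evaluator(model))  # cosine/dot accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```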

## Training Details

### Training Dataset

#### Unnamed Dataset

  • Size: 600 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 600 samples:

    |         | sentence_0                                        | sentence_1                                         |
    |:--------|:--------------------------------------------------|:---------------------------------------------------|
    | type    | string                                            | string                                             |
    | details | min: 7 tokens, mean: 18.93 tokens, max: 33 tokens | min: 4 tokens, mean: 43.35 tokens, max: 165 tokens |
  • Samples:

    | sentence_0 | sentence_1 |
    |:-----------|:-----------|
    | What are indirect prompt injections and how can they exploit vulnerabilities? | Security researchers have already demonstrated how indirect prompt injections can exploit vulnerabilities by stealing proprietary data or running malicious code remotely on a machine. |
    | What potential consequences can arise from exploiting vulnerabilities through indirect prompt injections? | Security researchers have already demonstrated how indirect prompt injections can exploit vulnerabilities by stealing proprietary data or running malicious code remotely on a machine. |
    | What factors might organizations consider when tailoring their measurement of GAI risks? | Organizations may choose to tailor how they measure GAI risks based on these characteristics. |
  • Loss: MatryoshkaLoss with these parameters (see the sketch after this list):
    ```json
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    ```
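
In code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`; a minimal sketch (the exact training script is not part of this repository):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),       # in-batch negatives ranking loss
    matryoshka_dims=[768, 512, 256, 128, 64],  # nested embedding sizes
    matryoshka_weights=[1, 1, 1, 1, 1],        # equal weight per size
)
```

One payoff of Matryoshka training is that embeddings can be truncated at load time, for example `SentenceTransformer("Galatea007/finetuned_arctic", truncate_dim=256)`, trading a little accuracy for smaller vectors.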
    

### Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • num_train_epochs: 5
  • multi_dataset_batch_sampler: round_robin
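
Wired into a training script, these non-defaults would look roughly as follows (a sketch assuming the standard `SentenceTransformerTrainer` API; the `output_dir` and the one-row dataset are hypothetical stand-ins):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Stand-in for the 600 (sentence_0, sentence_1) training pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What factors might organizations consider when measuring GAI risks?"],
    "sentence_1": ["Organizations may choose to tailor how they measure GAI risks."],
})

loss = MatryoshkaLoss(
    model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64]
)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic",  # hypothetical
    num_train_epochs=5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    multi_dataset_batch_sampler="round_robin",
    # eval_strategy="steps" additionally requires an eval dataset or evaluator.
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```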

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

### Training Logs

| Epoch  | Step | cosine_map@100 |
|:-------|:-----|:---------------|
| 1.0    | 30   | 0.9216         |
| 1.6667 | 50   | 0.9292         |
| 2.0    | 60   | 0.9361         |
| 3.0    | 90   | 0.9418         |

### Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.1.1
  • Transformers: 4.45.0
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.20.0
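
To approximate this environment, the listed versions can be pinned at install time (the card's PyTorch is the `+cu121` CUDA build from the PyTorch wheel index; the plain PyPI wheel is the closest substitute):

```bash
pip install sentence-transformers==3.1.1 transformers==4.45.0 torch==2.4.1 accelerate==0.34.2 datasets==3.0.1 tokenizers==0.20.0
```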

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```