metadata
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
  - dot_accuracy@1
  - dot_accuracy@3
  - dot_accuracy@5
  - dot_accuracy@10
  - dot_precision@1
  - dot_precision@3
  - dot_precision@5
  - dot_precision@10
  - dot_recall@1
  - dot_recall@3
  - dot_recall@5
  - dot_recall@10
  - dot_ndcg@10
  - dot_mrr@10
  - dot_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:714
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: What are some examples of data privacy issues mentioned in the context?
    sentences:
      - >-
        on a principle of local control, such that those individuals closest to
        the data subject have more access while 

        those who are less proximate do not (e.g., a teacher has access to their
        students’ daily progress data while a 

        superintendent does not). 

        Reporting. In addition to the reporting on data privacy (as listed above
        for non-sensitive data), entities devel-

        oping technologies related to a sensitive domain and those collecting,
        using, storing, or sharing sensitive data 

        should, whenever appropriate, regularly provide public reports
        describing: any data security lapses or breaches 

        that resulted in sensitive data leaks; the number, type, and outcomes of
        ethical pre-reviews undertaken; a 

        description of any data sold, shared, or made public, and how that data
        was assessed to determine it did not pres-

        ent a sensitive data risk; and ongoing risk identification and
        management procedures, and any mitigation added
      - >-
        DATA PRIVACY 

        HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE

        Real-life examples of how these principles can become reality, through
        laws, policies, and practical 

        technical and sociotechnical approaches to protecting rights,
        opportunities, and access. 

        The Privacy Act of 1974 requires privacy protections for personal
        information in federal 

        records systems, including limits on data retention, and also provides
        individuals a general 

        right to access and correct their data. Among other things, the Privacy
        Act limits the storage of individual 

        information in federal systems of records, illustrating the principle of
        limiting the scope of data retention. Under 

        the Privacy Act, federal agencies may only retain data about an
        individual that is “relevant and necessary” to 

        accomplish an agency’s statutory purpose or to comply with an Executive
        Order of the President. The law allows
      - >-
        DATA PRIVACY 

        WHY THIS PRINCIPLE IS IMPORTANT

        This section provides a brief summary of the problems which the
        principle seeks to address and protect 

        against, including illustrative examples. 

        

        An insurer might collect data from a person's social media presence as
        part of deciding what life

        insurance rates they should be offered.64

        

        A data broker harvested large amounts of personal data and then suffered
        a breach, exposing hundreds of

        thousands of people to potential identity theft. 65

        

        A local public housing authority installed a facial recognition system
        at the entrance to housing complexes to

        assist law enforcement with identifying individuals viewed via camera
        when police reports are filed, leading

        the community, both those living in the housing complex and not, to have
        videos of them sent to the local

        police department and made available for scanning by its facial
        recognition software.66

        
  - source_sentence: >-
      What are the main topics covered in the National Institute of Standards
      and Technology's AI Risk Management Framework?
    sentences:
      - >-
        https://www.rand.org/pubs/research_reports/RRA2977-2.html. 

        Nicoletti, L. et al. (2023) Humans Are Biased. Generative Ai Is Even
        Worse. Bloomberg. 

        https://www.bloomberg.com/graphics/2023-generative-ai-bias/. 

        National Institute of Standards and Technology (2024) Adversarial
        Machine Learning: A Taxonomy and 

        Terminology of Attacks and Mitigations
        https://csrc.nist.gov/pubs/ai/100/2/e2023/final 

        National Institute of Standards and Technology (2023) AI Risk Management
        Framework. 

        https://www.nist.gov/itl/ai-risk-management-framework 

        National Institute of Standards and Technology (2023) AI Risk Management
        Framework, Chapter 3: AI 

        Risks and Trustworthiness. 

        https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics 

        National Institute of Standards and Technology (2023) AI Risk Management
        Framework, Chapter 6: AI 

        RMF Profiles.
        https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Core_And_Profiles/6-sec-profile
      - >-
        (e.g., via red-teaming, field testing, participatory engagements,
        performance 

        assessments, user feedback mechanisms). 

        Human-AI Configuration 

        AI Actor Tasks: AI Development, AI Deployment, AI Impact Assessment,
        Operation and Monitoring 
         
        MANAGE 2.2: Mechanisms are in place and applied to sustain the value of
        deployed AI systems. 

        Action ID 

        Suggested Action 

        GAI Risks 

        MG-2.2-001 

        Compare GAI system outputs against pre-defined organization risk
        tolerance, 

        guidelines, and principles, and review and test AI-generated content
        against 

        these guidelines. 

        CBRN Information or Capabilities; 

        Obscene, Degrading, and/or 

        Abusive Content; Harmful Bias and 

        Homogenization; Dangerous, 

        Violent, or Hateful Content 

        MG-2.2-002 

        Document training data sources to trace the origin and provenance of AI-

        generated content. 

        Information Integrity 

        MG-2.2-003 

        Evaluate feedback loops between GAI system content provenance and human
      - >-
        domain or for functions that are required for administrative reasons
        (e.g., school attendance records), unless 

        consent is acquired, if appropriate, and the additional expectations in
        this section are met. Consent for non-

        necessary functions should be optional, i.e., should not be required,
        incentivized, or coerced in order to 

        receive opportunities or access to services. In cases where data is
        provided to an entity (e.g., health insurance 

        company) in order to facilitate payment for such a need, that data
        should only be used for that purpose. 

        Ethical review and use prohibitions. Any use of sensitive data or
        decision process based in part on sensi-

        tive data that might limit rights, opportunities, or access, whether the
        decision is automated or not, should go 

        through a thorough ethical review and monitoring, both in advance and by
        periodic review (e.g., via an indepen-

        dent ethics committee or similarly robust process). In some cases, this
        ethical review may determine that data
  - source_sentence: >-
      How can organizations leverage user feedback to enhance content provenance
      and risk management efforts?
    sentences:
      - >-
        tested, there will always be situations for which the system fails. The
        American public deserves protection via human 

        review against these outlying or unexpected scenarios. In the case of
        time-critical systems, the public should not have 

        to wait—immediate human consideration and fallback should be available.
        In many time-critical systems, such a 

        remedy is already immediately available, such as a building manager who
        can open a door in the case an automated 

        card access system fails. 

        In the criminal justice system, employment, education, healthcare, and
        other sensitive domains, automated systems 

        are used for many purposes, from pre-trial risk assessments and parole
        decisions to technologies that help doctors 

        diagnose disease. Absent appropriate safeguards, these technologies can
        lead to unfair, inaccurate, or dangerous 

        outcomes. These sensitive domains require extra protections. It is
        critically important that there is extensive human 

        oversight in such settings.
      - >-
        enable organizations to maximize the utility of provenance data and risk
        management efforts. 

        A.1.7. Enhancing Content Provenance through Structured Public Feedback 

        While indirect feedback methods such as automated error collection
        systems are useful, they often lack 

        the context and depth that direct input from end users can provide.
        Organizations can leverage feedback 

        approaches described in the Pre-Deployment Testing section to capture
        input from external sources such 

        as through AI red-teaming.  

        Integrating pre- and post-deployment external feedback into the
        monitoring process for GAI models and 

        corresponding applications can help enhance awareness of performance
        changes and mitigate potential 

        risks and harms from outputs. There are many ways to capture and make
        use of user feedback  before 

        and after GAI systems and digital content transparency approaches are
        deployed  to gain insights about
      - >-
        A.1. Governance 

        A.1.1. Overview 

        Like any other technology system, governance principles and techniques
        can be used to manage risks 

        related to generative AI models, capabilities, and applications.
        Organizations may choose to apply their 

        existing risk tiering to GAI systems, or they may opt to revise or
        update AI system risk levels to address 

        these unique GAI risks. This section describes how organizational
        governance regimes may be re-

        evaluated and adjusted for GAI contexts. It also addresses third-party
        considerations for governing across 

        the AI value chain.  

        A.1.2. Organizational Governance 

        GAI opportunities, risks and long-term performance characteristics are
        typically less well-understood 

        than non-generative AI tools and may be perceived and acted upon by
        humans in ways that vary greatly. 

        Accordingly, GAI may call for different levels of oversight from AI
        Actors or different human-AI
  - source_sentence: >-
      What should be ensured for users who have trouble with the automated
      system?
    sentences:
      - >-
        32 

        MEASURE 2.6: The AI system is evaluated regularly for safety risks  as
        identified in the MAP function. The AI system to be 

        deployed is demonstrated to be safe, its residual negative risk does not
        exceed the risk tolerance, and it can fail safely, particularly if 

        made to operate beyond its knowledge limits. Safety metrics reflect
        system reliability and robustness, real-time monitoring, and 

        response times for AI system failures. 

        Action ID 

        Suggested Action 

        GAI Risks 

        MS-2.6-001 

        Assess adverse impacts, including health and wellbeing impacts for value
        chain 

        or other AI Actors that are exposed to sexually explicit, offensive, or
        violent 

        information during GAI training and maintenance. 

        Human-AI Configuration; Obscene, 

        Degrading, and/or Abusive 

        Content; Value Chain and 

        Component Integration; 

        Dangerous, Violent, or Hateful 

        Content 

        MS-2.6-002 

        Assess existence or levels of harmful bias, intellectual property
        infringement,
      - >-
        APPENDIX

        Systems that impact the safety of communities such as automated traffic
        control systems, elec 

        -ctrical grid controls, smart city technologies, and industrial
        emissions and environmental

        impact control algorithms; and

        Systems related to access to benefits or services or assignment of
        penalties such as systems that

        support decision-makers who adjudicate benefits such as collating or
        analyzing information or

        matching records, systems which similarly assist in the adjudication of
        administrative or criminal

        penalties, fraud detection algorithms, services or benefits access
        control algorithms, biometric

        systems used as access control, and systems which make benefits or
        services related decisions on a

        fully or partially autonomous basis (such as a determination to revoke
        benefits).

        54
      - >-
        meaningfully impact rights, opportunities, or access should have greater
        availability (e.g., staffing) and over­

        sight of human consideration and fallback mechanisms. 

        Accessible. Mechanisms for human consideration and fallback, whether
        in-person, on paper, by phone, or 

        otherwise provided, should be easy to find and use. These mechanisms
        should be tested to ensure that users 

        who have trouble with the automated system are able to use human
        consideration and fallback, with the under­

        standing that it may be these users who are most likely to need the
        human assistance. Similarly, it should be 

        tested to ensure that users with disabilities are able to find and use
        human consideration and fallback and also 

        request reasonable accommodations or modifications. 

        Convenient. Mechanisms for human consideration and fallback should not
        be unreasonably burdensome as 

        compared to the automated system’s equivalent. 

        49
  - source_sentence: >-
      What must lenders provide to consumers who are denied credit under the
      Fair Credit Reporting Act?
    sentences:
      - >-
        8 

        Trustworthy AI Characteristics: Accountable and Transparent, Privacy
        Enhanced, Safe, Secure and 

        Resilient 

        2.5. Environmental Impacts 

        Training, maintaining, and operating (running inference on) GAI systems
        are resource-intensive activities, 

        with potentially large energy and environmental footprints. Energy and
        carbon emissions vary based on 

        what is being done with the GAI model (i.e., pre-training, fine-tuning,
        inference), the modality of the 

        content, hardware used, and type of task or application. 

        Current estimates suggest that training a single transformer LLM can
        emit as much carbon as 300 round-

        trip flights between San Francisco and New York. In a study comparing
        energy consumption and carbon 

        emissions for LLM inference, generative tasks (e.g., text summarization)
        were found to be more energy- 

        and carbon-intensive than discriminative or non-generative tasks (e.g.,
        text classification).
      - >-
        that consumers who are denied credit receive "adverse action" notices.
        Anyone who relies on the information in a 

        credit report to deny a consumer credit must, under the Fair Credit
        Reporting Act, provide an "adverse action" 

        notice to the consumer, which includes "notice of the reasons a creditor
        took adverse action on the application 

        or on an existing credit account."90 In addition, under the risk-based
        pricing rule,91 lenders must either inform 

        borrowers of their credit score, or else tell consumers when "they are
        getting worse terms because of 

        information in their credit report." The CFPB has also asserted that
        "[t]he law gives every applicant the right to 

        a specific explanation if their application for credit was denied, and
        that right is not diminished simply because 

        a company uses a complex algorithm that it doesn't understand."92 Such
        explanations illustrate a shared value 

        that certain decisions need to be explained.
      - >-
        measures to prevent, flag, or take other action in response to outputs
        that 

        reproduce particular training data (e.g., plagiarized, trademarked,
        patented, 

        licensed content or trade secret material). 

        Intellectual Property; CBRN 

        Information or Capabilities
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.881578947368421
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9671052631578947
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9868421052631579
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.881578947368421
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3223684210526316
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19736842105263155
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999999
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.881578947368421
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9671052631578947
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9868421052631579
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9460063349721777
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9282346491228071
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9282346491228068
            name: Cosine Map@100
          - type: dot_accuracy@1
            value: 0.881578947368421
            name: Dot Accuracy@1
          - type: dot_accuracy@3
            value: 0.9671052631578947
            name: Dot Accuracy@3
          - type: dot_accuracy@5
            value: 0.9868421052631579
            name: Dot Accuracy@5
          - type: dot_accuracy@10
            value: 1
            name: Dot Accuracy@10
          - type: dot_precision@1
            value: 0.881578947368421
            name: Dot Precision@1
          - type: dot_precision@3
            value: 0.3223684210526316
            name: Dot Precision@3
          - type: dot_precision@5
            value: 0.19736842105263155
            name: Dot Precision@5
          - type: dot_precision@10
            value: 0.09999999999999999
            name: Dot Precision@10
          - type: dot_recall@1
            value: 0.881578947368421
            name: Dot Recall@1
          - type: dot_recall@3
            value: 0.9671052631578947
            name: Dot Recall@3
          - type: dot_recall@5
            value: 0.9868421052631579
            name: Dot Recall@5
          - type: dot_recall@10
            value: 1
            name: Dot Recall@10
          - type: dot_ndcg@10
            value: 0.9460063349721777
            name: Dot Ndcg@10
          - type: dot_mrr@10
            value: 0.9282346491228071
            name: Dot Mrr@10
          - type: dot_map@100
            value: 0.9282346491228068
            name: Dot Map@100

SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
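
In plain terms, an embedding comes from running the underlying BertModel, keeping the [CLS] token (CLS pooling), and L2-normalizing the result. Below is a minimal sketch of that pipeline using plain transformers; it assumes the usual sentence-transformers repository layout, where the base transformer lives at the repository root:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Sketch of the three modules above: Transformer -> CLS pooling -> Normalize.
tokenizer = AutoTokenizer.from_pretrained("jet-taekyo/snowflake_finetuned_recursive")
bert = AutoModel.from_pretrained("jet-taekyo/snowflake_finetuned_recursive")

batch = tokenizer(
    ["An example sentence"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # (batch, seq_len, 768)
cls = hidden[:, 0]                            # pooling_mode_cls_token=True: keep [CLS]
embedding = F.normalize(cls, p=2, dim=1)      # Normalize(): unit-length vectors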

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jet-taekyo/snowflake_finetuned_recursive")
# Run inference
sentences = [
    'What must lenders provide to consumers who are denied credit under the Fair Credit Reporting Act?',
    'that consumers who are denied credit receive "adverse action" notices. Anyone who relies on the information in a \ncredit report to deny a consumer credit must, under the Fair Credit Reporting Act, provide an "adverse action" \nnotice to the consumer, which includes "notice of the reasons a creditor took adverse action on the application \nor on an existing credit account."90 In addition, under the risk-based pricing rule,91 lenders must either inform \nborrowers of their credit score, or else tell consumers when "they are getting worse terms because of \ninformation in their credit report." The CFPB has also asserted that "[t]he law gives every applicant the right to \na specific explanation if their application for credit was denied, and that right is not diminished simply because \na company uses a complex algorithm that it doesn\'t understand."92 Such explanations illustrate a shared value \nthat certain decisions need to be explained.',
    'measures to prevent, flag, or take other action in response to outputs that \nreproduce particular training data (e.g., plagiarized, trademarked, patented, \nlicensed content or trade secret material). \nIntellectual Property; CBRN \nInformation or Capabilities',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
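
Because the model was trained with MatryoshkaLoss over the dimensions 768, 512, 256, 128, and 64 (see Training Details below), its embeddings can also be truncated to a smaller size with only a modest quality drop. A sketch using the truncate_dim argument available in recent sentence-transformers releases:

from sentence_transformers import SentenceTransformer

# 256 is one of the Matryoshka dimensions this model was trained with.
model_256 = SentenceTransformer("jet-taekyo/snowflake_finetuned_recursive", truncate_dim=256)
embeddings = model_256.encode(["What does the Privacy Act of 1974 require?"])
print(embeddings.shape)
# (1, 256)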

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.8816
cosine_accuracy@3 0.9671
cosine_accuracy@5 0.9868
cosine_accuracy@10 1.0
cosine_precision@1 0.8816
cosine_precision@3 0.3224
cosine_precision@5 0.1974
cosine_precision@10 0.1
cosine_recall@1 0.8816
cosine_recall@3 0.9671
cosine_recall@5 0.9868
cosine_recall@10 1.0
cosine_ndcg@10 0.946
cosine_mrr@10 0.9282
cosine_map@100 0.9282
dot_accuracy@1 0.8816
dot_accuracy@3 0.9671
dot_accuracy@5 0.9868
dot_accuracy@10 1.0
dot_precision@1 0.8816
dot_precision@3 0.3224
dot_precision@5 0.1974
dot_precision@10 0.1
dot_recall@1 0.8816
dot_recall@3 0.9671
dot_recall@5 0.9868
dot_recall@10 1.0
dot_ndcg@10 0.946
dot_mrr@10 0.9282
dot_map@100 0.9282
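
The metric names above match the output of sentence-transformers' InformationRetrievalEvaluator, which is presumably what produced them. A minimal sketch of running such an evaluation follows; the queries, corpus, and relevance judgments here are illustrative placeholders, since the actual evaluation split is not published with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("jet-taekyo/snowflake_finetuned_recursive")

# Placeholder data: query id -> text, corpus id -> text,
# and query id -> set of relevant corpus ids.
queries = {"q1": "What must lenders provide to consumers who are denied credit?"}
corpus = {
    "d1": 'Anyone who relies on a credit report to deny a consumer credit must provide an "adverse action" notice.',
    "d2": "Training a single transformer LLM can emit as much carbon as 300 round-trip flights.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100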

Training Details

Training Dataset

Unnamed Dataset

  • Size: 714 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 714 samples:
    • sentence_0: string (min: 11 tokens, mean: 18.46 tokens, max: 32 tokens)
    • sentence_1: string (min: 21 tokens, mean: 175.32 tokens, max: 512 tokens)
  • Samples:
    Sample 1
    • sentence_0: What is the purpose of conducting adversarial testing in the context of GAI risks?
    • sentence_1: Human-AI Configuration; Information Integrity; Harmful Bias and Homogenization AI Actor Tasks: AI Deployment, Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI Actors to validate whether the system is performing consistently as intended. Results are documented. Action ID Suggested Action GAI Risks MS-4.2-001 Conduct adversarial testing at a regular cadence to map and measure GAI risks, including tests to address attempts to deceive or manipulate the application of provenance techniques or other misuses. Identify vulnerabilities and understand potential misuse scenarios and unintended outputs. Information Integrity; Information Security MS-4.2-002 Evaluate GAI system performance in real-world scenarios to observe its
    Sample 2
    • sentence_0: How are measurement results regarding AI system trustworthiness documented and validated?
    • sentence_1: (same passage as Sample 1)
    Sample 3
    • sentence_0: What types of data provenance information are included in the GAI system inventory entries?
    • sentence_1: following items in GAI system inventory entries: Data provenance information (e.g., source, signatures, versioning, watermarks); Known issues reported from internal bug tracking or external information sharing resources (e.g., AI incident database, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles and responsibilities; Special rights and considerations for intellectual property, licensed works, or personal, privileged, proprietary or sensitive data; Underlying foundation models, versions of underlying models, and access modes. Data Privacy; Human-AI Configuration; Information Integrity; Intellectual Property; Value Chain and Component Integration AI Actor Tasks: Governance and Oversight
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
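
For reference, here is a sketch of how this loss configuration is typically constructed in sentence-transformers, with equal weight on each of the five Matryoshka dimensions as in the parameters above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# In-batch negatives ranking loss, applied at every truncated embedding size.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],  # n_dims_per_step=-1 (use all dims each step) is the default
)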
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • num_train_epochs: 5
  • multi_dataset_batch_sampler: round_robin
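
A hedged sketch of a sentence-transformers 3.x training run using these non-default values; the output directory and the single-pair dataset are placeholders rather than the actual 714-sample training data:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
loss = MatryoshkaLoss(
    model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64]
)

# Placeholder (query, passage) pairs; the real dataset has 714 such rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What does the Privacy Act of 1974 require?"],
    "sentence_1": ["The Privacy Act of 1974 requires privacy protections for personal information in federal records systems."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="snowflake_finetuned",  # placeholder path
    num_train_epochs=5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; evaluates loss on the same pairs
    loss=loss,
)
trainer.train()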

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_map@100
1.0 36 0.9145
1.3889 50 0.9256
2.0 72 0.9246
2.7778 100 0.9282

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}