---
base_model: Alibaba-NLP/gte-large-en-v1.5
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
  - dot_accuracy@1
  - dot_accuracy@3
  - dot_accuracy@5
  - dot_accuracy@10
  - dot_precision@1
  - dot_precision@3
  - dot_precision@5
  - dot_precision@10
  - dot_recall@1
  - dot_recall@3
  - dot_recall@5
  - dot_recall@10
  - dot_ndcg@10
  - dot_mrr@10
  - dot_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:500
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      1. What measures should be taken to avoid "mission creep" when identifying
      goals for data collection?  

      2. Why is it important to assess new privacy risks before using collected
      data in a different context?
    sentences:
      - >-
        narrow identified goals, to avoid "mission creep."  Anticipated data
        collection should be determined to be 

        strictly necessary to the identified goals and should be minimized as
        much as possible. Data collected based on 

        these identified goals and for a specific context should not be used in
        a different context without assessing for 

        new privacy risks and implementing appropriate mitigation measures,
        which may include express consent.
      - >-
        Promoting the Use of Trustworthy Artificial Intelligence in the Federal
        Government (December 2020). 

        This white paper recognizes that national security (which includes
        certain law enforcement and 

        homeland security activities) and defense activities are of increased
        sensitivity and interest to our nation’s 

        adversaries and are often subject to special requirements, such as those
        governing classified information and 

        other protected data. Such activities require alternative, compatible
        safeguards through existing policies that
      - >-
        establish and maintain the capabilities that will allow individuals to
        use their own automated systems to help 

        them make consent, access, and control decisions in a complex data
        ecosystem. Capabilities include machine 

        readable data, standardized data formats, metadata or tags for
        expressing data processing permissions and 

        preferences and data provenance and lineage, context of use and
        access-specific tags, and training models for 

        assessing privacy risk.
  - source_sentence: >-
      1. What types of discrimination are mentioned in the context that can
      impact individuals based on their race and ethnicity?  

      2. How does the context address discrimination related to gender identity
      and sexual orientation?
    sentences:
      - |-
        HUMAN ALTERNATIVES, CONSIDERATION
        ALLBACK
        F
        AND
        , 
        46
      - >-
        SAFE AND EFFECTIVE 

        SYSTEMS 

        WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS

        The expectations for automated systems are meant to serve as a blueprint
        for the development of additional 

        technical standards and practices that are tailored for particular
        sectors and contexts. 

        Derived data sources tracked and reviewed carefully. Data that is
        derived from other data through
      - >-
        impacts disfavoring people based on their race, color, ethnicity, 

        sex 

        (including 

        pregnancy, 

        childbirth, 

        and 

        related 

        medical 

        conditions, 

        gender 

        identity, 

        intersex 

        status, 

        and 

        sexual 

        orientation), religion, age, national origin, disability, veteran
        status,
  - source_sentence: >-
      1. What roles do the panelists hold in their respective organizations?  

      2. How are AI systems and other technologies being discussed in relation
      to their impact by the individual panelists?
    sentences:
      - >-
        requirements of the Federal agencies that enforce them. These principles
        are not intended to, and do not, 

        prohibit or limit any lawful activity of a government agency, including
        law enforcement, national security, or 

        intelligence activities. 

        The appropriate application of the principles set forth in this white
        paper depends significantly on the 

        context in which automated systems are being utilized. In some
        circumstances, application of these principles
      - >-


        Karen Levy, Assistant Professor, Department of Information Science,
        Cornell University

        

        Natasha Duarte, Project Director, Upturn

        

        Elana Zeide, Assistant Professor, University of Nebraska College of Law

        

        Fabian Rogers, Constituent Advocate, Office of NY State Senator Jabari
        Brisport and Community

        Advocate and Floor Captain, Atlantic Plaza Towers Tenants Association

        The individual panelists described the ways in which AI systems and
        other technologies are increasingly being
      - >-
        SECTION TITLE­

        FOREWORD

        Among the great challenges posed to democracy today is the use of
        technology, data, and automated systems in 

        ways that threaten the rights of the American public. Too often, these
        tools are used to limit our opportunities and 

        prevent our access to critical resources or services. These problems are
        well documented. In America and around 

        the world, systems supposed to help with patient care have proven
        unsafe, ineffective, or biased. Algorithms used
  - source_sentence: >-
      1. What are the key tenets of the Department of Defense's Artificial
      Intelligence Ethical Principles?  

      2. How do the Principles of Artificial Intelligence Ethics for the
      Intelligence Community guide personnel in their use of AI?
    sentences:
      - >-
        different treatment or impacts disfavoring people based on their race,
        color, ethnicity, sex (including 

        pregnancy, childbirth, and related medical conditions, gender identity,
        intersex status, and sexual 

        orientation), religion, age, national origin, disability, veteran
        status, genetic information, or any other 

        classification protected by law. Depending on the specific
        circumstances, such algorithmic discrimination
      - >-
        ethical use and development of AI systems.20 The Department of Defense
        has adopted Artificial Intelligence 

        Ethical Principles, and tenets for Responsible Artificial Intelligence
        specifically tailored to its national 

        security and defense activities.21 Similarly, the U.S. Intelligence
        Community (IC) has developed the Principles 

        of Artificial Intelligence Ethics for the Intelligence Community to
        guide personnel on whether and how to
      - >-
        DATA PRIVACY 

        WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS

        The expectations for automated systems are meant to serve as a blueprint
        for the development of additional 

        technical standards and practices that are tailored for particular
        sectors and contexts. 

        Protect the public from unchecked surveillance 

        Heightened oversight of surveillance. Surveillance or monitoring systems
        should be subject to
  - source_sentence: >-
      1. What measures should be taken to ensure the accuracy and timeliness of
      data?  

      2. Why is it important to limit access to sensitive data and derived data?
    sentences:
      - >-
        maintain accurate, timely, and complete data. 

        Limit access to sensitive data and derived data. Sensitive data and
        derived data should not be sold, 

        shared, or made public as part of data brokerage or other agreements.
        Sensitive data includes data that can be 

        used to infer sensitive information; even systems that are not directly
        marketed as sensitive domain technologies 

        are expected to keep sensitive data private. Access to such data should
        be limited based on necessity and based
      - >-
        comply with the Privacy Act’s requirements. Among other things, a court
        may order a federal agency to amend or 

        correct an individual’s information in its records or award monetary
        damages if an inaccurate, irrelevant, untimely, 

        or incomplete record results in an adverse determination about an
        individual’s “qualifications, character, rights,  

        opportunities…, or benefits.” 

        NIST’s Privacy Framework provides a comprehensive, detailed and
        actionable approach for
      - >-
        made public whenever possible. Care will need to be taken to balance
        individual privacy with evaluation data 

        access needs. 

        Reporting. When members of the public wish to know what data about them
        is being used in a system, the 

        entity responsible for the development of the system should respond
        quickly with a report on the data it has 

        collected or stored about them. Such a report should be
        machine-readable, understandable by most users, and
model-index:
  - name: SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9733333333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9733333333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.33333333333333326
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19999999999999996
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9733333333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9901581267619055
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9866666666666667
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9866666666666667
            name: Cosine Map@100
          - type: dot_accuracy@1
            value: 0.9733333333333334
            name: Dot Accuracy@1
          - type: dot_accuracy@3
            value: 1
            name: Dot Accuracy@3
          - type: dot_accuracy@5
            value: 1
            name: Dot Accuracy@5
          - type: dot_accuracy@10
            value: 1
            name: Dot Accuracy@10
          - type: dot_precision@1
            value: 0.9733333333333334
            name: Dot Precision@1
          - type: dot_precision@3
            value: 0.33333333333333326
            name: Dot Precision@3
          - type: dot_precision@5
            value: 0.19999999999999996
            name: Dot Precision@5
          - type: dot_precision@10
            value: 0.09999999999999998
            name: Dot Precision@10
          - type: dot_recall@1
            value: 0.9733333333333334
            name: Dot Recall@1
          - type: dot_recall@3
            value: 1
            name: Dot Recall@3
          - type: dot_recall@5
            value: 1
            name: Dot Recall@5
          - type: dot_recall@10
            value: 1
            name: Dot Recall@10
          - type: dot_ndcg@10
            value: 0.9901581267619055
            name: Dot Ndcg@10
          - type: dot_mrr@10
            value: 0.9866666666666667
            name: Dot Mrr@10
          - type: dot_map@100
            value: 0.9866666666666667
            name: Dot Map@100
---

# SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-large-en-v1.5. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-large-en-v1.5
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

### Model Sources

  • Documentation: [Sentence Transformers Documentation](https://sbert.net)
  • Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
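
As a quick sanity check, the loaded model exposes the settings described above. This is a sketch, not part of the original card; it assumes the checkpoint keeps the base model's custom `NewModel` code, which requires `trust_remote_code=True`:

```python
from sentence_transformers import SentenceTransformer

# Sketch: the base gte-large-en-v1.5 architecture (NewModel) ships custom
# modeling code, hence trust_remote_code=True.
model = SentenceTransformer("lw2134/policy_gte_large", trust_remote_code=True)

print(model.max_seq_length)                      # 8192
print(model[1].pooling_mode_cls_token)           # True -> CLS-token pooling
print(model.get_sentence_embedding_dimension())  # 1024
```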

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lw2134/policy_gte_large")
# Run inference
sentences = [
    '1. What measures should be taken to ensure the accuracy and timeliness of data?  \n2. Why is it important to limit access to sensitive data and derived data?',
    'maintain accurate, timely, and complete data. \nLimit access to sensitive data and derived data. Sensitive data and derived data should not be sold, \nshared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be \nused to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies \nare expected to keep sensitive data private. Access to such data should be limited based on necessity and based',
    'comply with the Privacy Act’s requirements. Among other things, a court may order a federal agency to amend or \ncorrect an individual’s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, \nor incomplete record results in an adverse determination about an individual’s “qualifications, character, rights, … \nopportunities…, or benefits.” \nNIST’s Privacy Framework provides a comprehensive, detailed and actionable approach for',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
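
Since semantic search is among the intended uses, here is a minimal retrieval sketch along those lines. The query and the two corpus strings are illustrative stand-ins (shortened from the widget examples above), and `trust_remote_code=True` is assumed to be needed for the custom base architecture:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lw2134/policy_gte_large", trust_remote_code=True)

# Illustrative policy chunks; any list of strings works the same way.
query = "Why is it important to limit access to sensitive data?"
corpus = [
    "Limit access to sensitive data and derived data. Sensitive data should not be sold, shared, or made public.",
    "The expectations for automated systems are meant to serve as a blueprint for additional technical standards.",
]

query_emb = model.encode([query])   # shape (1, 1024)
corpus_emb = model.encode(corpus)   # shape (2, 1024)

# Cosine similarity, the model's configured similarity function
scores = model.similarity(query_emb, corpus_emb)  # shape [1, 2]
best = scores.argmax().item()
print(corpus[best])
```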

## Evaluation

### Metrics

#### Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.9733 |
| cosine_accuracy@3   | 1.0    |
| cosine_accuracy@5   | 1.0    |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.9733 |
| cosine_precision@3  | 0.3333 |
| cosine_precision@5  | 0.2    |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.9733 |
| cosine_recall@3     | 1.0    |
| cosine_recall@5     | 1.0    |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@10      | 0.9902 |
| cosine_mrr@10       | 0.9867 |
| cosine_map@100      | 0.9867 |
| dot_accuracy@1      | 0.9733 |
| dot_accuracy@3      | 1.0    |
| dot_accuracy@5      | 1.0    |
| dot_accuracy@10     | 1.0    |
| dot_precision@1     | 0.9733 |
| dot_precision@3     | 0.3333 |
| dot_precision@5     | 0.2    |
| dot_precision@10    | 0.1    |
| dot_recall@1        | 0.9733 |
| dot_recall@3        | 1.0    |
| dot_recall@5        | 1.0    |
| dot_recall@10       | 1.0    |
| dot_ndcg@10         | 0.9902 |
| dot_mrr@10          | 0.9867 |
| dot_map@100         | 0.9867 |
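
Two reading notes: each evaluation query has exactly one relevant chunk, so recall@k equals accuracy@k and precision@k is capped at 1/k (hence 0.3333 at k=3 and 0.1 at k=10), and the cosine and dot-product metrics coincide, which indicates the embeddings are effectively unit-normalized. Below is a sketch of how such figures can be computed with the library's `InformationRetrievalEvaluator`; the queries, corpus, and relevance judgments shown are hypothetical placeholders, since the held-out split is not published:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("lw2134/policy_gte_large", trust_remote_code=True)

# Hypothetical placeholders for the unpublished held-out queries and chunks.
queries = {"q1": "Why is it important to limit access to sensitive data?"}
corpus = {
    "d1": "Limit access to sensitive data and derived data. Sensitive data should not be sold or shared.",
    "d2": "The expectations for automated systems are meant to serve as a blueprint for additional standards.",
}
relevant_docs = {"q1": {"d1"}}  # exactly one relevant chunk per query

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="policy-eval",
)
print(evaluator(model))  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```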

## Training Details

### Training Dataset

#### Unnamed Dataset

  • Size: 500 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 500 samples:

|         | sentence_0 | sentence_1 |
|:--------|:-----------|:-----------|
| type    | string     | string     |
| details | min: 27 tokens, mean: 40.71 tokens, max: 62 tokens | min: 11 tokens, mean: 78.92 tokens, max: 104 tokens |
  • Samples:

| sentence_0 | sentence_1 |
|:-----------|:-----------|
| 1. What is the purpose of the AI Bill of Rights mentioned in the context?<br>2. When was the Blueprint for an AI Bill of Rights published? | BLUEPRINT FOR AN<br>AI BILL OF<br>RIGHTS<br>MAKING AUTOMATED<br>SYSTEMS WORK FOR<br>THE AMERICAN PEOPLE<br>OCTOBER 2022 |
| 1. What is the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy?<br>2. When was the Blueprint for an AI Bill of Rights released in relation to the announcement of the process to develop it? | About this Document<br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was<br>published by the White House Office of Science and Technology Policy in October 2022. This framework was<br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered |
| 1. What initiative did the OSTP announce the launch of one year prior to the release mentioned in the context?<br>2. Where can the framework for the AI bill of rights be accessed online? | released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered<br>world.” Its release follows a year of public engagement to inform this initiative. The framework is available<br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights<br>About the Office of Science and Technology Policy<br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology |
  • Loss: MatryoshkaLoss with these parameters:

    ```json
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    ```
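
Because the model was trained with MatryoshkaLoss at 1024/512/256/128/64 dimensions, embeddings can be truncated to any of those sizes for cheaper storage and search, at some cost in quality. A minimal sketch using the library's `truncate_dim` option (`trust_remote_code=True` is again assumed for the custom base architecture):

```python
from sentence_transformers import SentenceTransformer

# Sketch: 256 is one of the matryoshka_dims listed above.
model = SentenceTransformer(
    "lw2134/policy_gte_large", truncate_dim=256, trust_remote_code=True
)

emb = model.encode(["Limit access to sensitive data and derived data."])
print(emb.shape)  # (1, 256)
```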

### Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • multi_dataset_batch_sampler: round_robin
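
For context, a hedged sketch of how a run with these non-default settings could look. The two-column dataset here is a placeholder for the actual 500 question-chunk pairs, which are not published, and `eval_strategy="steps"` as listed above would additionally require an eval dataset or evaluator:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Placeholder pairs; the real dataset has 500 (sentence_0, sentence_1) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Why is it important to limit access to sensitive data?"],
    "sentence_1": ["Limit access to sensitive data and derived data."],
})

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)

# MatryoshkaLoss wrapping MultipleNegativesRankingLoss, as configured above.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[1024, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="policy_gte_large",
    num_train_epochs=3,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    # eval_strategy="steps" would also need an eval dataset or evaluator.
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```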

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

### Training Logs

| Epoch | Step | cosine_map@100 |
|:-----:|:----:|:--------------:|
| 1.0   | 25   | 0.9867         |
| 2.0   | 50   | 0.9867         |
| 3.0   | 75   | 0.9867         |

### Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```