---
base_model: Alibaba-NLP/gte-large-en-v1.5
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
  - dot_accuracy@1
  - dot_accuracy@3
  - dot_accuracy@5
  - dot_accuracy@10
  - dot_precision@1
  - dot_precision@3
  - dot_precision@5
  - dot_precision@10
  - dot_recall@1
  - dot_recall@3
  - dot_recall@5
  - dot_recall@10
  - dot_ndcg@10
  - dot_mrr@10
  - dot_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:500
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      narrow identified goals, to avoid "mission creep."  Anticipated data
      collection should be determined to be 

      strictly necessary to the identified goals and should be minimized as much
      as possible. Data collected based on 

      these identified goals and for a specific context should not be used in a
      different context without assessing for 

      new privacy risks and implementing appropriate mitigation measures, which
      may include express consent.
    sentences:
      - >-
        What measures should be taken if data collected for specific identified
        goals is to be used in a different context?
      - >-
        What measures should be taken to ensure the privacy of sensitive data
        and limit access to it?
      - >-
        What special requirements are mentioned in the white paper regarding
        national security and defense activities in relation to trustworthy
        artificial intelligence?
  - source_sentence: >-


      Karen Levy, Assistant Professor, Department of Information Science,
      Cornell University

      

      Natasha Duarte, Project Director, Upturn

      

      Elana Zeide, Assistant Professor, University of Nebraska College of Law

      

      Fabian Rogers, Constituent Advocate, Office of NY State Senator Jabari
      Brisport and Community

      Advocate and Floor Captain, Atlantic Plaza Towers Tenants Association

      The individual panelists described the ways in which AI systems and other
      technologies are increasingly being
    sentences:
      - >-
        What are some of the challenges posed to democracy by the use of
        technology and automated systems, as mentioned in the foreword?
      - >-
        What principles has the U.S. Intelligence Community developed to guide
        personnel in the ethical use of AI?
      - >-
        What roles do the panelists hold in relation to the discussion on AI
        systems and technology?
  - source_sentence: |-
      impacts disfavoring people based on their race, color, ethnicity, 
      sex 
      (including 
      pregnancy, 
      childbirth, 
      and 
      related 
      medical 
      conditions, 
      gender 
      identity, 
      intersex 
      status, 
      and 
      sexual 
      orientation), religion, age, national origin, disability, veteran status,
    sentences:
      - >-
        What does the term "HUMAN ALTERNATIVES" refer to in the context
        provided?
      - What types of discrimination are mentioned in the context?
      - >-
        What are the expectations for automated systems in relation to public
        protection from surveillance?
  - source_sentence: >-
      establish and maintain the capabilities that will allow individuals to use
      their own automated systems to help 

      them make consent, access, and control decisions in a complex data
      ecosystem. Capabilities include machine 

      readable data, standardized data formats, metadata or tags for expressing
      data processing permissions and 

      preferences and data provenance and lineage, context of use and
      access-specific tags, and training models for 

      assessing privacy risk.
    sentences:
      - >-
        What measures should be taken to ensure that independent evaluations of
        algorithmic discrimination are conducted while balancing individual
        privacy and data access needs?
      - >-
        What capabilities are necessary for individuals to effectively manage
        consent and control decisions in a complex data ecosystem?
      - >-
        What are some examples of classifications that are protected by law
        against discrimination?
  - source_sentence: >-
      SAFE AND EFFECTIVE 

      SYSTEMS 

      WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS

      The expectations for automated systems are meant to serve as a blueprint
      for the development of additional 

      technical standards and practices that are tailored for particular sectors
      and contexts. 

      Derived data sources tracked and reviewed carefully. Data that is derived
      from other data through
    sentences:
      - >-
        What is the purpose of the expectations set for automated systems in
        relation to technical standards and practices?
      - >-
        What factors influence the appropriate application of the principles
        outlined in the white paper regarding automated systems?
      - >-
        What actions can a court take if a federal agency fails to comply with
        the Privacy Act regarding an individual's records?
model-index:
  - name: SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.88
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9866666666666667
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9866666666666667
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.88
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3288888888888888
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1973333333333333
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.88
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9866666666666667
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9866666666666667
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9499978881111136
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9330158730158731
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9330158730158731
            name: Cosine Map@100
          - type: dot_accuracy@1
            value: 0.88
            name: Dot Accuracy@1
          - type: dot_accuracy@3
            value: 0.9866666666666667
            name: Dot Accuracy@3
          - type: dot_accuracy@5
            value: 0.9866666666666667
            name: Dot Accuracy@5
          - type: dot_accuracy@10
            value: 1
            name: Dot Accuracy@10
          - type: dot_precision@1
            value: 0.88
            name: Dot Precision@1
          - type: dot_precision@3
            value: 0.3288888888888888
            name: Dot Precision@3
          - type: dot_precision@5
            value: 0.1973333333333333
            name: Dot Precision@5
          - type: dot_precision@10
            value: 0.09999999999999998
            name: Dot Precision@10
          - type: dot_recall@1
            value: 0.88
            name: Dot Recall@1
          - type: dot_recall@3
            value: 0.9866666666666667
            name: Dot Recall@3
          - type: dot_recall@5
            value: 0.9866666666666667
            name: Dot Recall@5
          - type: dot_recall@10
            value: 1
            name: Dot Recall@10
          - type: dot_ndcg@10
            value: 0.9499978881111136
            name: Dot Ndcg@10
          - type: dot_mrr@10
            value: 0.9330158730158731
            name: Dot Mrr@10
          - type: dot_map@100
            value: 0.9330158730158731
            name: Dot Map@100
---

SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-large-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-large-en-v1.5
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
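
The Pooling module uses the CLS token (pooling_mode_cls_token=True). As a hypothetical equivalent written against transformers directly, not the card's own method; note that trust_remote_code=True is needed because the gte base model ships custom "NewModel" code:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "Alibaba-NLP/gte-large-en-v1.5"  # base model; this repo holds the finetuned weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

batch = tokenizer(
    ["An example sentence."],
    padding=True, truncation=True, max_length=8192, return_tensors="pt",
)
with torch.no_grad():
    last_hidden = model(**batch).last_hidden_state  # (batch, seq_len, 1024)

embedding = last_hidden[:, 0]  # module (1): take the [CLS] token embedding
embedding = F.normalize(embedding, p=2, dim=1)  # optional: makes dot product == cosine
```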

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub; trust_remote_code=True is needed for the custom gte architecture
model = SentenceTransformer("lw2134/policy_gte_large_5", trust_remote_code=True)
# Run inference
sentences = [
    'SAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nDerived data sources tracked and reviewed carefully. Data that is derived from other data through',
    'What is the purpose of the expectations set for automated systems in relation to technical standards and practices?',
    'What factors influence the appropriate application of the principles outlined in the white paper regarding automated systems?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
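
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can be truncated to the smaller trained dimensionalities with only a modest quality loss. A sketch using the truncate_dim option; 256 is one of the trained dims, chosen arbitrarily:

```python
from sentence_transformers import SentenceTransformer

# Any of the trained Matryoshka dims works here: 1024, 512, 256, 128, 64
model = SentenceTransformer(
    "lw2134/policy_gte_large_5", trust_remote_code=True, truncate_dim=256
)
embeddings = model.encode([
    "What types of discrimination are mentioned in the context?",
    "What capabilities are necessary for individuals to effectively manage consent?",
])
print(embeddings.shape)
# (2, 256)
```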

Evaluation

Metrics

Information Retrieval

| Metric              | Value  |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.88   |
| cosine_accuracy@3   | 0.9867 |
| cosine_accuracy@5   | 0.9867 |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.88   |
| cosine_precision@3  | 0.3289 |
| cosine_precision@5  | 0.1973 |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.88   |
| cosine_recall@3     | 0.9867 |
| cosine_recall@5     | 0.9867 |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@10      | 0.95   |
| cosine_mrr@10       | 0.933  |
| cosine_map@100      | 0.933  |
| dot_accuracy@1      | 0.88   |
| dot_accuracy@3      | 0.9867 |
| dot_accuracy@5      | 0.9867 |
| dot_accuracy@10     | 1.0    |
| dot_precision@1     | 0.88   |
| dot_precision@3     | 0.3289 |
| dot_precision@5     | 0.1973 |
| dot_precision@10    | 0.1    |
| dot_recall@1        | 0.88   |
| dot_recall@3        | 0.9867 |
| dot_recall@5        | 0.9867 |
| dot_recall@10       | 1.0    |
| dot_ndcg@10         | 0.95   |
| dot_mrr@10          | 0.933  |
| dot_map@100         | 0.933  |
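
Metrics of this shape are produced by the sentence-transformers InformationRetrievalEvaluator. A minimal sketch of running such an evaluation; the queries, corpus, and ids below are placeholders, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("lw2134/policy_gte_large_5", trust_remote_code=True)

# Map query ids to queries, doc ids to passages, and each query id
# to the set of doc ids that count as relevant for it.
queries = {"q1": "What types of discrimination are mentioned in the context?"}
corpus = {"d1": "impacts disfavoring people based on their race, color, ethnicity, ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg, mrr, map
print(results)
```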

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 500 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 500 samples:
    |         | anchor                                              | positive                                             |
    |:--------|:----------------------------------------------------|:-----------------------------------------------------|
    | type    | string                                              | string                                               |
    | details | min: 12 tokens; mean: 21.76 tokens; max: 37 tokens  | min: 11 tokens; mean: 78.92 tokens; max: 104 tokens  |
  • Samples:

    | anchor | positive |
    |:-------|:---------|
    | What is the primary purpose of the AI Bill of Rights outlined in the October 2022 blueprint? | BLUEPRINT FOR AN AI BILL OF RIGHTS MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE OCTOBER 2022 |
    | What was the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy? | About this Document The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022. This framework was released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered |
    | What initiative did the OSTP announce a year prior to the release of the framework for a bill of rights for an AI-powered world? | released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered world.” Its release follows a year of public engagement to inform this initiative. The framework is available online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights About the Office of Science and Technology Policy The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology |
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
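
In code, this loss configuration corresponds roughly to the sketch below; the base-model id is taken from this card, and the rest mirrors the parameters above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)

# MultipleNegativesRankingLoss uses the other in-batch positives as negatives;
# MatryoshkaLoss applies it at every truncated dimensionality with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all listed dims at every step
)
```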
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
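
As a sketch, these settings map onto a sentence-transformers v3 training loop roughly as follows; the dataset file name, output_dir, eval split, and save_strategy are illustrative assumptions, not values stated on this card:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)
loss = MatryoshkaLoss(
    model, MultipleNegativesRankingLoss(model), matryoshka_dims=[1024, 512, 256, 128, 64]
)

# Hypothetical json file with "anchor" and "positive" columns (see Training Dataset)
dataset = load_dataset("json", data_files="train.json", split="train")
split = dataset.train_test_split(test_size=0.15, seed=42)  # assumed held-out eval split
train_dataset, eval_dataset = split["train"], split["test"]

args = SentenceTransformerTrainingArguments(
    output_dir="policy_gte_large_5",  # illustrative
    num_train_epochs=5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy when load_best_model_at_end=True
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # keep duplicate anchors out of a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```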

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | cosine_map@100 |
|:-----:|:----:|:--------------:|
| 1.0   | 1    | 0.9022         |
| 2.0   | 2    | 0.9311         |
| 3.0   | 3    | 0.9397         |
| 4.0   | 4    | 0.9330         |
| 5.0   | 5    | 0.9330         |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1
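
For reproducibility, the packages above can be pinned at install time; a sketch (the torch 2.4.1+cu121 build additionally needs the matching CUDA wheel index):

pip install sentence-transformers==3.1.1 transformers==4.44.2 accelerate==0.34.2 datasets==3.0.1 tokenizers==0.19.1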

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}