metadata
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
  - en
library_name: sentence-transformers
license: apache-2.0
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:2231
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      Brian Pugh Chief Information Officer, Comscore Français Amazon Simple
      Storage Service (Amazon S3) is an object storage service offering
      industry-leading scalability, data availability, security, and
      performance. Learn more » 2023 Español Then, Comscore can set up its own
      privacy controls, including a mutually agreed upon join key that gives
      collaborators the ability to match data tables and perform analyses using
      a double-blind method. This method means that all parties can protect
      sensitive data, such as cookies, first-party IDs, and IP addresses, and
      run queries on combined data to gain richer, more comprehensive insights.
      “Instead of ingesting all that information and doing the analysis behind
      our firewall, we can join those things in AWS Clean Rooms and get back
      what we need,” says Brian Pugh, chief information officer at Comscore.
      Additionally, Comscore can organize its analytics by demographics or other
      categories so that it can identify trends in how groups of people interact
      with certain media. Comscore can also connect AWS Clean Rooms with Amazon
      QuickSight—a solution that provides unified business intelligence at
      hyperscale—so that it can visualize its data in one place using
      interactive, customizable dashboards. 日本語 About Comscore Get Started 한국어
      Organizations of all sizes across all industries are transforming their
      businesses and delivering on their missions every day using AWS. Contact
      our experts and start your own AWS journey today. Industry Challenge AWS
      Clean Rooms helps customers and their partners more easily and securely
      collaborate and analyze their collective datasets—without sharing or
      copying one another’s underlying data. AWS Services Used 中文 (繁體) Bahasa
      Indonesia AWS Clean Rooms. . . helps Comscore to provide the best possible
      measurement and support to our data partners to trust that the data that
      they’re providing is safe and protected. ” Ρусский عربي Analytics and
      insights provider Comscore provides a wide range of data-driven solutions
      that support planning, transacting, and measuring media across channels.
      It serves media companies and advertisers, promoting transparency and
      trust within the industry. Benefits of Using AWS 中文 (简体) Comscore turned
      to Amazon Web Services (AWS) and chose AWS Clean Rooms to uphold
      privacy-enhanced collaborations with its partners. AWS Clean Rooms helps
      Comscore’s customers and partners to securely match, analyze, and
      collaborate on their combined datasets with ease and without sharing or
      revealing underlying data. Using this solution, Comscore can invite up to
      five collaborators into an AWS Clean Room and pull pre-encrypted data into
      a configured data table from Amazon Simple Storage Service (Amazon S3), an
      object storage service built to retrieve any amount of data from anywhere.
      Media ratings company Comscore can provide richer insights to advertisers
      while maintaining data privacy by securely collaborating on its data with
      third parties using AWS Clean Rooms. Amazon QuickSight powers data-driven
      organizations with unified business intelligence (BI) at hyperscale.
    sentences:
      - >-
        How does Comscore use AWS Clean Rooms to protect sensitive data while
        collaborating with third parties?
      - >-
        How did AWS help CEHC in building a cost-effective alternate
        production/DR environment in a fraction of the time compared to a
        traditional brick-and-mortar production build?
      - >-
        How does AWS aim to democratize access to generative AI applications for
        all builders through services like Amazon Bedrock?
  - source_sentence: >-
      We convert the HTML pages on this site into smaller overlapping chunks (to
      retain some context continuity between chunks) of information and then
      convert these chunks into embeddings using the gpt-j-6b model and store
      the embeddings in OpenSearch Service. We implement the RAG functionality
      inside an AWS Lambda function with Amazon API Gateway to handle routing
      all requests to the Lambda. We implement a chatbot application in
      Streamlit which invokes the function via the API Gateway and the function
      does a similarity search in the OpenSearch Service index for the
      embeddings of user question. The matching documents (chunks) are added to
      the prompt as context by the Lambda function and then the function uses
      the flan-t5-xxl model deployed as a SageMaker endpoint to generate an
      answer to the user question. All the code for this post is available in
      the GitHub repo. The following figure represents the high-level
      architecture of the proposed solution. Figure 1: Architecture Step-by-step
      explanation: The User provides a question via the Streamlit web
      application. The Streamlit application invokes the API Gateway endpoint
      REST API. The API Gateway invokes the Lambda function. The function
      invokes the SageMaker endpoint to convert user question into embeddings.
      The function invokes an OpenSearch Service API to find similar
      documents to the user question. The function creates a “prompt” with the
      user query and the “similar documents” as context and asks the SageMaker
      endpoint to generate a response. The response is provided from the
      function to the API Gateway. The API Gateway provides the response to the
      Streamlit application. The User is able to view the response on the
      Streamlit application. As illustrated in the architecture diagram, we use
      the following AWS services: SageMaker and Amazon SageMaker JumpStart for
      hosting the two LLMs. OpenSearch Service for storing the embeddings of the
      enterprise knowledge corpus and doing similarity search with user
      questions. Lambda for implementing the RAG functionality and exposing it
      as a REST endpoint via the API Gateway. Amazon SageMaker Processing jobs
      for large scale data ingestion into OpenSearch. Amazon SageMaker Studio
      for hosting the Streamlit application. AWS Identity and Access Management
      roles and policies for access management.
    sentences:
      - >-
        How can model producers and application builders effectively fine-tune
        generative foundation models to be aligned with human preferences and
        perform specific tasks accurately?
      - >-
        How do retailers lose out on revenue due to issues with search
        functionality on their websites?
      - >-
        How is the RAG functionality implemented within the AWS architecture
        described for handling user questions and providing responses via the
        Streamlit application?
  - source_sentence: >-
      Although Amazon EKS provided management capabilities, it was immediately
      apparent that we were managing infrastructure that wasn’t specifically
      tailored for inference. Forethought had to manage model inference on
      Amazon EKS ourselves, which was a burden on engineering efficiency. For
      example, in order to share expensive GPU resources between multiple
      models, we were responsible for allocating rigid memory fractions to
      models that were specified during deployment. We wanted to address the
      following key problems with our existing infrastructure: High cost – To
      ensure that each model had enough resources, we would be very conservative
      in how many models to fit per instance. This resulted in much higher costs
      for model hosting than necessary. Low reliability – Despite being
      conservative in our memory allocation, not all models have the same
      requirements, and occasionally some models would throw out of memory (OOM)
      errors. Inefficient management – We had to manage different deployment
      manifests for each type of model (such as classifiers, embeddings, and
      autocomplete), which was time-consuming and error-prone. We also had to
      maintain the logic to determine the memory allocation for different model
      types. Ultimately, we needed an inference platform to take on the heavy
      lifting of managing our models at runtime to improve the cost,
      reliability, and the management of serving our models. SageMaker MMEs
      allowed us to address these needs. Through its smart and dynamic model
      loading and unloading, and its scaling capabilities, SageMaker MMEs
      provided a significantly less expensive and more reliable solution for
      hosting our models. We are now able to fit many more models per instance
      and don’t have to worry about OOM errors because SageMaker MMEs handle
      loading and unloading models dynamically. In addition, deployments are now
      as simple as calling Boto3 SageMaker APIs and attaching the proper auto
      scaling policies. The following diagram illustrates our legacy
      architecture. To begin our migration to SageMaker MMEs, we identified the
      best use cases for MMEs and which of our models would benefit the most
      from this change. MMEs are best used for the following: Models that are
      expected to have low latency but can withstand a cold start time (when
      it’s first loaded in) Models that are called often and consistently Models
      that need partial GPU resources Models that share common requirements and
      inference logic We identified our embeddings models and autocomplete
      language models as the best candidates for our migration. To organize
      these models under MMEs, we would create one MME per model type, or task,
      one for our embeddings models, and another for autocomplete language
      models. We already had an API layer on top of our models for model
      management and inference. Our task at hand was to rework how this API was
      deploying and handling inference on models under the hood with SageMaker,
      with minimal changes to how clients and product teams interacted with the
      API. We also needed to package our models and custom inference logic to be
      compatible with NVIDIA Triton Inference Server using SageMaker MMEs.
    sentences:
      - >-
        How did the company address the issues of high cost, low reliability,
        and inefficient management in managing model inference on Amazon EKS,
        and what solution did they implement to improve the cost, reliability,
        and management of serving their models?
      - >-
        How can Aurora be configured to interface with Comprehend for analyzing
        text data?
      - >-
        How has the implementation of chatbots and voice bots powered by Amazon
        Lex improved the customer and agent experiences at WaFd Bank's contact
        center solution?
  - source_sentence: >-
      In our current approach, we store these files in Amazon S3. Although these
      stored files aren’t accessible from the browser in our version of the
      code, you can modify the code to play previously generated audio files by
      fetching them from Amazon S3 (instead of regenerating the audio for the
      text again using Amazon Polly). We have more code examples for accessing
      Amazon Polly with Python in the AWS Code Library. Create the solution The
      entire solution is available from our Github repo. To create this solution
      in your account, follow the instructions in the README. md file. The
      solution includes an AWS CloudFormation template to provision your
      resources. Cleanup To clean up the resources created in this demo, perform
      the following steps: Delete the S3 buckets created to store the
      CloudFormation template (Bucket A), the source code (Bucket B) and the
      website ( pth-cf-text-highlighter-website-[Suffix] ). Delete the
      CloudFormation stack pth-cf. Delete the S3 bucket containing the speech
      files ( pth-speech-[Suffix] ). This bucket was created by the
      CloudFormation template to store the audio and speech marks files
      generated by Amazon Polly. Summary In this post, we showed an example of a
      solution that can highlight text as it’s being spoken using Amazon Polly.
      It was developed using the Amazon Polly speech marks feature, which
      provides us markers for the place each word or sentence begins in an audio
      file. The solution is available as a CloudFormation template. It can be
      deployed as is to any web application that performs text-to-speech
      conversion. This would be useful for adding visual capabilities to audio
      in books, avatars with lip-sync capabilities (using viseme speech marks),
      websites, and blogs, and for aiding people with hearing impairments. It
      can be extended to perform additional tasks besides highlighting text. For
      example, the browser can show images, play music, and perform other
      animations on the front end while the text is being spoken. This
      capability can be useful for creating dynamic audio books, educational
      content, and richer text-to-speech applications. We welcome you to try out
      this solution and learn more about the relevant AWS services from the
      following links.
    sentences:
      - >-
        How has the TRRF platform improved patient care for individuals with
        Angelman Syndrome, according to Megan Cross of the Foundation for
        Angelman Syndrome (FAST)?
      - >-
        How does Amazon SageMaker Ground Truth Plus help users prepare
        high-quality training datasets for generative AI applications,
        specifically in terms of removing the heavy lifting associated with data
        labeling applications and managing the labeling workforce?
      - >-
        How can the solution of highlighting text as it's being spoken using
        Amazon Polly be extended to perform additional tasks, and what are some
        examples of these tasks?
  - source_sentence: >-
      CU Coventry’s bachelor of science in cloud computing course officially
      began in September 2020 and has already seen success from the program’s
      industry-driven framework. Overview Validate technical skills and cloud
      expertise to grow your career and business. Learn more » Get Started on
      AWS services using AWS Academy Learner Labs Build your cloud skills at
      your own pace, on your own time, and completely for free. Looking ahead,
      Coventry University Group plans to expand bachelor of science degree in
      cloud computing courses to its campuses in London and Wroclaw. “The
      ability to have hands-on experience with AWS services—the same ones that
      companies use in the real world—is invaluable,” said Tomasz, a student of
      the Cloud Computing Course. “Once we join the workforce, we can apply our
      skill sets and hit the ground running. ” Türkçe English Students
      successfully engaging in the program graduate with in-demand skills for
      careers in the cloud, including valuable experience with AWS services
      through AWS Academy Learner Labs. AWS Academy provides higher education
      institutions with ready-to-teach cloud computing curriculum to prepare
      students for AWS Certifications, which validate technical skills and cloud
      expertise for in-demand cloud jobs. “The most important thing is for the
      modules to reflect what the industry needs. We want students to add value
      to the global workforce,” says Flood. Taking advantage of AWS Education
      Programs, CU Coventry’s BSc degree in cloud computing innovates on AWS to
      track the IT industry’s rapid pace. AWS Certification Deutsch Coventry
      University Group is based in the United Kingdom with more than 30,000
      students and more than 200 undergraduate and postgraduate degrees across
      its schools, faculties, and campuses. Tiếng Việt AWS Training and
      Certification Italiano ไทย Outcome | Looking to the Future of Coventry
      University Group’s Cloud Computing Program Learn more » Increases
      employability Coventry University Group used AWS Education Programs to
      create a comprehensive and flexible degree to help students meet growing
      IT industry cloud skills demand. Both the 3-year bachelor of science
      degree in cloud computing and its accelerated version were developed in
      collaboration with AWS. These programs were designed by working backwards
      from the cloud skills employers are currently seeking in the UK and across
      the global labor market. “The approach gave us insights into what skill
      gaps were lacking in the industry. From there, we designed the courses,
      with the AWS team providing helpful inputs,” says Flood. “For example, the
      AWS team pointed out that there was an industry need for serverless
      computing skills, and we integrated that into our curriculum. ” Português.
    sentences:
      - >-
        How did Read use Amazon Web Services (AWS) and NVIDIA Riva to improve
        the performance of its transcription tool while keeping costs low?
      - >-
        How does RUSH University System for Health use HECAP and Amazon
        HealthLake to address healthcare disparities and improve patient
        outcomes for residents of Chicago's West Side?
      - >-
        How does CU Coventry's Bachelor of Science in Cloud Computing program
        incorporate AWS services and industry-driven insights to prepare
        students for in-demand cloud jobs?
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.4596774193548387
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8024193548387096
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8991935483870968
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9596774193548387
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.4596774193548387
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2674731182795699
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17983870967741938
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0959677419354839
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.4596774193548387
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8024193548387096
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8991935483870968
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9596774193548387
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7184810942825108
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6395305299539169
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6408821665935496
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.46774193548387094
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7983870967741935
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8951612903225806
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9596774193548387
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.46774193548387094
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2661290322580645
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17903225806451614
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0959677419354839
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.46774193548387094
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7983870967741935
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8951612903225806
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9596774193548387
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7213571757198337
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6433467741935482
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6448406697096213
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.4596774193548387
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7983870967741935
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9112903225806451
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9637096774193549
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.4596774193548387
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2661290322580645
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.18225806451612905
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0963709677419355
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.4596774193548387
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7983870967741935
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9112903225806451
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9637096774193549
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7207090934241043
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6410682283666154
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6422448191163128
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.4314516129032258
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7580645161290323
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8830645161290323
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9475806451612904
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.4314516129032258
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.25268817204301075
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17661290322580647
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09475806451612905
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.4314516129032258
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7580645161290323
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8830645161290323
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9475806451612904
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.6948316840385708
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6124535970302099
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6145615813099632
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.4032258064516129
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7459677419354839
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8709677419354839
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9516129032258065
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.4032258064516129
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.24865591397849462
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17419354838709677
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09516129032258065
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.4032258064516129
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7459677419354839
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8709677419354839
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9516129032258065
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.6800470209866719
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.5919978878648234
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.5935355054811555
            name: Cosine Map@100

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("anishareddyalla/bge-base-aws-case-studies")
# Run inference
sentences = [
    'CU Coventry’s bachelor of science in cloud computing course officially began in September 2020 and has already seen success from the program’s industry-driven framework. Overview Validate technical skills and cloud expertise to grow your career and business. Learn more » Get Started on AWS services using AWS Academy Learner Labs Build your cloud skills at your own pace, on your own time, and completely for free. Looking ahead, Coventry University Group plans to expand bachelor of science degree in cloud computing courses to its campuses in London and Wroclaw. “The ability to have hands-on experience with AWS services—the same ones that companies use in the real world—is invaluable,” said Tomasz, a student of the Cloud Computing Course. “Once we join the workforce, we can apply our skill sets and hit the ground running. ” Türkçe English Students successfully engaging in the program graduate with in-demand skills for careers in the cloud, including valuable experience with AWS services through AWS Academy Learner Labs. AWS Academy provides higher education institutions with ready-to-teach cloud computing curriculum to prepare students for AWS Certifications, which validate technical skills and cloud expertise for in-demand cloud jobs. “The most important thing is for the modules to reflect what the industry needs. We want students to add value to the global workforce,” says Flood. Taking advantage of AWS Education Programs, CU Coventry’s BSc degree in cloud computing innovates on AWS to track the IT industry’s rapid pace. AWS Certification Deutsch Coventry University Group is based in the United Kingdom with more than 30,000 students and more than 200 undergraduate and postgraduate degrees across its schools, faculties, and campuses. Tiếng Việt AWS Training and Certification Italiano ไทย Outcome | Looking to the Future of Coventry University Group’s Cloud Computing Program Learn more » Increases employability Coventry University Group used AWS Education Programs to create a comprehensive and flexible degree to help students meet growing IT industry cloud skills demand. Both the 3-year bachelor of science degree in cloud computing and its accelerated version were developed in collaboration with AWS. These programs were designed by working backwards from the cloud skills employers are currently seeking in the UK and across the global labor market. “The approach gave us insights into what skill gaps were lacking in the industry. From there, we designed the courses, with the AWS team providing helpful inputs,” says Flood. “For example, the AWS team pointed out that there was an industry need for serverless computing skills, and we integrated that into our curriculum. ” Português.',
    "How does CU Coventry's Bachelor of Science in Cloud Computing program incorporate AWS services and industry-driven insights to prepare students for in-demand cloud jobs?",
    "How does RUSH University System for Health use HECAP and Amazon HealthLake to address healthcare disparities and improve patient outcomes for residents of Chicago's West Side?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval (dataset: dim_768)

Metric Value
cosine_accuracy@1 0.4597
cosine_accuracy@3 0.8024
cosine_accuracy@5 0.8992
cosine_accuracy@10 0.9597
cosine_precision@1 0.4597
cosine_precision@3 0.2675
cosine_precision@5 0.1798
cosine_precision@10 0.096
cosine_recall@1 0.4597
cosine_recall@3 0.8024
cosine_recall@5 0.8992
cosine_recall@10 0.9597
cosine_ndcg@10 0.7185
cosine_mrr@10 0.6395
cosine_map@100 0.6409

Information Retrieval (dataset: dim_512)

Metric Value
cosine_accuracy@1 0.4677
cosine_accuracy@3 0.7984
cosine_accuracy@5 0.8952
cosine_accuracy@10 0.9597
cosine_precision@1 0.4677
cosine_precision@3 0.2661
cosine_precision@5 0.179
cosine_precision@10 0.096
cosine_recall@1 0.4677
cosine_recall@3 0.7984
cosine_recall@5 0.8952
cosine_recall@10 0.9597
cosine_ndcg@10 0.7214
cosine_mrr@10 0.6433
cosine_map@100 0.6448

Information Retrieval (dataset: dim_256)

Metric Value
cosine_accuracy@1 0.4597
cosine_accuracy@3 0.7984
cosine_accuracy@5 0.9113
cosine_accuracy@10 0.9637
cosine_precision@1 0.4597
cosine_precision@3 0.2661
cosine_precision@5 0.1823
cosine_precision@10 0.0964
cosine_recall@1 0.4597
cosine_recall@3 0.7984
cosine_recall@5 0.9113
cosine_recall@10 0.9637
cosine_ndcg@10 0.7207
cosine_mrr@10 0.6411
cosine_map@100 0.6422

Information Retrieval (dataset: dim_128)

Metric Value
cosine_accuracy@1 0.4315
cosine_accuracy@3 0.7581
cosine_accuracy@5 0.8831
cosine_accuracy@10 0.9476
cosine_precision@1 0.4315
cosine_precision@3 0.2527
cosine_precision@5 0.1766
cosine_precision@10 0.0948
cosine_recall@1 0.4315
cosine_recall@3 0.7581
cosine_recall@5 0.8831
cosine_recall@10 0.9476
cosine_ndcg@10 0.6948
cosine_mrr@10 0.6125
cosine_map@100 0.6146

Information Retrieval (dataset: dim_64)

Metric Value
cosine_accuracy@1 0.4032
cosine_accuracy@3 0.746
cosine_accuracy@5 0.871
cosine_accuracy@10 0.9516
cosine_precision@1 0.4032
cosine_precision@3 0.2487
cosine_precision@5 0.1742
cosine_precision@10 0.0952
cosine_recall@1 0.4032
cosine_recall@3 0.746
cosine_recall@5 0.871
cosine_recall@10 0.9516
cosine_ndcg@10 0.68
cosine_mrr@10 0.592
cosine_map@100 0.5935
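
The five tables above come from the dim_768 through dim_64 evaluators listed in the metadata. A minimal sketch of how such multi-dimension retrieval evaluation can be set up with InformationRetrievalEvaluator and SequentialEvaluator; the queries, corpus, and relevant_docs here are toy placeholders, not the actual held-out split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import (
    InformationRetrievalEvaluator,
    SequentialEvaluator,
)

model = SentenceTransformer("anishareddyalla/bge-base-aws-case-studies")

# Toy placeholder data: {query_id: text}, {doc_id: text},
# {query_id: set of relevant doc_ids}.
queries = {"q1": "How does Comscore use AWS Clean Rooms to protect sensitive data?"}
corpus = {
    "d1": "Comscore uses AWS Clean Rooms to analyze combined datasets without sharing underlying data.",
    "d2": "Amazon Polly converts text into lifelike speech.",
    "d3": "AWS Academy provides cloud computing curriculum for higher education.",
}
relevant_docs = {"q1": {"d1"}}

# One evaluator per Matryoshka dimension, mirroring the dim_* result sets above.
dim_evaluators = [
    InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # score retrieval on embeddings truncated to `dim`
    )
    for dim in [768, 512, 256, 128, 64]
]
results = SequentialEvaluator(dim_evaluators)(model)
print(results)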

Training Details

Training Dataset

Unnamed Dataset

  • Size: 2,231 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 3 tokens, mean 434.98 tokens, max 512 tokens
    • anchor: string; min 13 tokens, mean 33.46 tokens, max 65 tokens
  • Samples:
    • positive: ”.
      anchor: What specific event or topic is being discussed in the given information?
    • positive: On AWS, Rackspace solved a major industry challenge with a solution that saved time, cut costs, and reduced complexity for its customers and itself. “When things go wrong, customers expect Rackspace to step in and act swiftly to solve their problem,” says Prewitt. “Using AWS Systems Manager, we can do that much more quickly. ” Português Rackspace needed a solution that could run both on premises and on the cloud. “We wanted one tool to use across the full suite of solutions that Rackspace manages,” says Gignac. AWS Systems Manager met that requirement and offered programmability. “That’s a key differentiator of AWS: we can use AWS Systems Manager to run shell scripts on individual VMs and do advanced orchestration,” Gignac continues. .
      anchor: How did Rackspace use AWS Systems Manager to solve major industry challenges and improve their ability to quickly address customer issues?
    • positive: Français Shortly after the onset of the pandemic in early 2020, Valant began offering a telehealth solution to provide virtual capabilities to practices and their patients. The solution was based on a digital communications platform that lacked a multi-user experience and many other requested features. “The platform we used offered peer-to-peer video only, and we needed group capabilities, chat, screen and file sharing, and a whiteboard,” says James Jay, chief technology officer at Valant Medical Solutions. “In behavioral health, it’s common to have parents, spouses, or other guests attend sessions, and we saw a significant demand from practices for multi-user functionality, as well as other features critical to engaging effectively with patients. We also had strong demand to integrate co-payment collection into telehealth check-in workflows in advance of sessions. ” 2023 Amazon Simple Email Service Español by using voice, video, messaging, and automated reminders Valant Medical Solutions, Inc. provides electronic health record software to behavioral health providers and practices. To add enhanced telehealth capabilities and improve patient communication, the company turned to Amazon Web Services to add capabilities in voice, video, messaging, and email through AWS Communication Developer Services to build a new telehealth solution for more than 2,500 behavioral health practices. AWS Communication Developer Services (CDS) are cloud-based APIs and SDKs that help builders add communication capabilities into their apps or websites with minimal coding. 日本語 Valant Medical Solutions, Inc. designs and develops web-based electronic health record (EHR) software to help behavioral health providers and practices streamline administration tasks and improve patient outcomes. More than 20,000 behavioral health professionals in group and solo private practices across the United States use the Valant platform to treat individuals seeking behavioral healthcare. The Valant IO system has extensive capabilities to enable providers to deliver value-based care through measurement-based assessment and ongoing outcome assessments. 5% Get Started 한국어 Overview Opportunity
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
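
This parameter dump maps directly onto the sentence-transformers losses API. A minimal sketch of how the same loss configuration can be constructed in code:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)
# Apply the ranking loss at every truncated dimension; matryoshka_weights
# defaults to 1 per dimension, matching the parameters above.
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])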
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
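
These non-default values map directly onto SentenceTransformerTrainingArguments. A minimal sketch; the output directory name is hypothetical, and save_strategy="epoch" is an assumption (load_best_model_at_end requires the save and eval strategies to match):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-aws-case-studies",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)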

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.9143 4 - 0.6055 0.6308 0.646 0.5623 0.6339
1.8286 8 - 0.6255 0.6505 0.6517 0.5791 0.6558
2.2857 10 2.0293 - - - - -
2.9714 13 - 0.6096 0.6472 0.6471 0.5935 0.6490
3.8857 17 - 0.6125 0.6410 0.6468 0.6020 0.6422
4.5714 20 0.5008 - - - - -
4.8 21 - 0.6156 0.6351 0.6409 0.6014 0.6391
5.9429 26 - 0.6143 0.6350 0.6367 0.6015 0.6406
6.8571 30 0.2964 0.6167 0.6371 0.6390 0.5981 0.6387
8.0 35 - 0.6138 0.6364 0.6391 0.5986 0.6392
8.9143 39 - 0.6173 0.6378 0.6389 0.6021 0.6394
9.1429 40 0.2382 0.6161 0.6376 0.6391 0.5982 0.6398
0.9143 4 - 0.6273 0.6535 0.6608 0.5949 0.66
1.8286 8 - 0.6177 0.6439 0.6515 0.6074 0.6508
2.2857 10 0.554 - - - - -
2.9714 13 - 0.6070 0.6300 0.6339 0.5923 0.6366
3.8857 17 - 0.6071 0.6332 0.6362 0.5976 0.6362
4.5714 20 0.2694 - - - - -
4.8 21 - 0.6124 0.6397 0.6455 0.5988 0.6404
5.9429 26 - 0.6155 0.6411 0.6446 0.6007 0.6429
6.8571 30 0.1746 0.6167 0.6429 0.6467 0.5942 0.6424
8.0 35 - 0.6166 0.6398 0.6462 0.5928 0.6429
8.9143 39 - 0.6108 0.6426 0.6448 0.5943 0.6432
9.1429 40 0.1419 0.6146 0.6422 0.6448 0.5935 0.6409
  • The saved checkpoint corresponds to the last row (epoch 9.1429, step 40), whose map@100 values match the metrics reported above. The epoch and step counters restart midway through the table because it accumulates logs from two training runs.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
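
To reproduce this environment, the versions above can be pinned at install time (a sketch; the exact PyTorch build may need to match your platform's CUDA setup):

pip install sentence-transformers==3.0.1 transformers==4.42.4 torch==2.3.1 accelerate==0.32.1 datasets==2.20.0 tokenizers==0.19.1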

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}