anishareddyalla committed on
Commit f5f5c82
1 Parent(s): 229ce4e

Add new SentenceTransformer model.
1_Pooling/config.json ADDED

{
    "word_embedding_dimension": 768,
    "pooling_mode_cls_token": true,
    "pooling_mode_mean_tokens": false,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false,
    "pooling_mode_weightedmean_tokens": false,
    "pooling_mode_lasttoken": false,
    "include_prompt": true
}
README.md ADDED
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2231
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us. It was a seamless experience for end users, and we had no production issues during the entire migration. ” Contact Sales Greater than 99. 99% Outcome | Gaining Insights on AWS to Prioritize Business Needs 한국어 Snap migrated more than 2 exabytes of data—roughly equivalent to 1. 5 trillion media files—seamlessly to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA. “The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us,” says Manoharan. “It was a seamless experience for Snapchatters, and we had no production issues during the entire migration. ” As a result of the migration, the company saved tens of millions of dollars on storage. Snap has configured Amazon S3 in 20 AWS Regions around the world so that customers anywhere can retrieve data in milliseconds. The AWS Global Infrastructure is the most secure, extensive, and reliable Global Cloud Infrastructure for a business’s applications. The global reach of AWS lets Snap store media closer to the place where Snapchatters are creating it for optimal performance. Snap is also able to deliver content efficiently using Amazon CloudFront, a content delivery network service built for high performance, security, and availability. “We’ve been able to off-load all of the regionalization work and costs to AWS so that we can focus on developing new features,” says Manoharan. As a result, Snapchat continues to meet its quarterly cost-optimization goals. Overview | Opportunity | Solution | Outcome | AWS Services Used 2 exabytes Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. … In 2016, Snap migrated its data to AWS. “We chose to migrate to AWS because of its global reach, excellent performance, and competitive pricing that, in turn, gave us the ability to reinvest in our business,” says Vijay Manoharan, manager of the media delivery platform team at Snap. Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. AWS Services Used In 2017, Snap migrated one of the app’s most central features—Snapchat Stories—to Amazon DynamoDB, a fully managed, serverless, NoSQL database designed to run high-performance applications at virtually any scale. Using Amazon DynamoDB, the company experienced greater than 99.
  sentences:
  - How did Snap save tens of millions of dollars on storage as a result of migrating to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA?
  - How has Panasonic Avionics Corporation leveraged Amazon Aurora MySQL-Compatible Edition and other AWS services to improve the reliability and scalability of its databases for in-flight entertainment and communications systems?
  - How does Ground Truth Plus ensure the quality of image and video captions generated by human annotators?
- source_sentence: ” 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский Customer Stories / Software & Internet عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Outcome | Expanding Intelligent Features of Virtual Care Amazon Transcribe is an automatic speech recognition service that makes it easy to add speech to text capabilities to any application. Learn more » Learn more » It is critical that video visits are secure, responsive, and reliable. Using AWS helps us provide all this in a performant and scalable way. " Overview With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications. Get Started Beyond traditional use cases, Salesforce is adding capabilities in medication-therapy management, connectivity for care coordinators, and other approaches for patient engagement. The company is developing a new feature that will expand its support of Virtual Care sessions to multiple participants, instead of just clinician and patient. This will facilitate care-team coordination with multiple parties in a single meeting. Using AWS, Salesforce circumvented the heavy lifting that would have been required to build and maintain a video-calling solution from scratch. Patients self-schedule virtual appointments, coordinate previsit activities, and conduct virtual visits in a HIPAA-compliant environment. A patient’s appointment request gets routed to Amazon Chime SDK. Clinicians then review a patient’s intake form and correlate the patient to a Virtual Care session using Amazon Chime SDK messaging, which connects providers and patients with secure, scalable messaging in their web and mobile applications. The Amazon Chime SDK control plane sends event notifications through a default event bus to Amazon EventBridge, a serverless event bus that helps organizations receive, filter, transform, route, and deliver events. Healthcare professionals deliver care over the internet in near real time, which has significantly reduced no-shows for appointments. “Using Amazon Chime SDK, we don’t have to worry about the mechanics of the video call,” Daftari says. “We can focus on features and functions that help differentiate our product in the marketplace, while also significantly improving our speed to launch. ” Salesforce further supports accessibility through embedding closed-captioning of video calls using Amazon Chime SDK live transcription. Amazon Chime SDK sends live audio streams to Amazon Transcribe, which automatically converts speech to text. Salesforce Health Cloud customers can use the live transcription capability to display subtitles, create meeting transcripts, or analyze content.
  sentences:
  - How did DB Energie use Amazon SageMaker and AWS to enhance the sustainability and reliability of its power grid operations?
  - How did Provectus assist Earth.com in enhancing the AI-powered image recognition capabilities of EarthSnap and reducing engineering heavy lifting through the implementation of end-to-end ML pipelines and managed MLOps platform?
  - How does Salesforce use AWS services such as Amazon Chime SDK and Amazon Transcribe to enhance their Virtual Care sessions for healthcare professionals and patients?
- source_sentence: It’s been a great success. ” Overview 93% Validate technical skills and cloud expertise to grow your career and business. Learn more » Amazon Web Services (AWS) Education Programs collaborate with education institutions and the public sector to provide access for individuals to develop cloud computing and digital skills. To help graduates boost their employability, Staffordshire University worked with the AWS team to introduce cloud computing skills training and add cloud courses to its credit-bearing computer science modules. Staffordshire University offers courses through AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers. Since the university added AWS Academy courses to its curriculum in 2017, several hundred students have participated. Of those students, 93 percent have achieved employment within 6 months of graduation. Empowered students Türkçe Solution | Learning by Doing Using AWS Learner Labs English With AWS Academy, our students love that they’re not just taking theory lessons. They get to work in actual environments with real AWS tools. ” Next up, Staffordshire University is expanding on the success of its cloud courses by launching additional programs of study developed in collaboration with the AWS team. Staffordshire University and the AWS team designed these programs by "Working Backwards" — an Amazon process that encourages companies to brainstorm solutions by using a customer challenge as the starting point — from the cloud skills employers are currently seeking in the United Kingdom and across the global labor market. One of these programs, which launches in September 2022, is a cloud computing course that features both cloud computing and cybersecurity modules and will offer students more opportunities to discover what’s possible with the AWS Cloud. “What we want to encourage is for students to play with AWS services as well as build confidence with the tools,” says Dr. Champion. to learn remotely using any hardware and earn AWS Certifications Staffordshire University added cloud computing skills training to its curriculum using AWS Education Programs, helping 93 percent of participants find employment within 6 months of graduation. covering cloud skills AWS Certification during the AWS Educate University Challenge Deutsch of graduates find jobs within 6 months Tiếng Việt Italiano ไทย Outcome | Developing New Cloud Coursework About Staffordshire University Staffordshire University is a public research university in Staffordshire, England. Founded in 1914, the university serves over 15,000 students across three schools and four campuses. The United Kingdom has experienced a technology boom in recent years, with technology funding tripling in the first 6 months of 2021 compared to the same period in 2020. In particular, employers need professionals with cloud computing skills ranging from cloud development to machine learning and data analytics. To meet demand, Staffordshire University offers students their choice of six AWS courses covering these key skills and more.
  sentences:
  - How has the collaboration between Staffordshire University and the AWS team impacted the employability of graduates in the field of cloud computing?
  - How can the confidence scores be used to verify the accuracy of sentiment assignments in the sentiment_results_final table, especially for any dubious sentiment assignments?
  - How did migrating to AWS help Travian Games improve the stability and reliability of their game servers, and what impact did this have on their players' experience?
- source_sentence: Contact our experts and start your own AWS journey today. customer and agent experience 2022 Overview WaFd Bank Transforms Contact Centers Using Conversational AI on AWS Customer Stories / Financial Services WaFd uses a data lake on AWS to store and analyze data from phone and chatbot conversations. “We’re getting incredible data from AWS through the conversational logs,” says Hubbard. “That has given us insights into what our customers are asking for so that we can add more self-service functionality. ” The data also gives WaFd more insight into call volumes, so the call center can better manage staff schedules. Opportunity | Using Amazon Lex to Implement an AI-Powered Contact Center Solution Türkçe English WaFd is a US retail and commercial bank with over 200 branches in eight states. In 2019, WaFd founded subsidiary Pike Street Labs, a fintech startup, to drive client-facing digital innovation for the bank. “Banks need to meet customers’ digital expectations,” says Dustin Hubbard, chief technology officer at WaFd Bank and Pike Street Labs. “Every year, customers expect more innovation because that’s what they see from new entrants or in other markets. ” Pike Street Labs redesigned WaFd’s online banking solution to provide personalized customer experiences and began tackling the bank’s customer care center. The company’s previous contact center solution used dated technology with limited features spread across disparate systems. This led to long wait times for customers and frustration for agents, who had to answer incoming calls without prior knowledge of what the customer needed. Agents also bore the burden of identifying fraudulent calls. WaFd needed a solution to improve both the customer and agent experiences. Previously, WaFd used two different systems in its customer care center to manage its voice and chat-based customer interactions, with no way for one system to recognize that an agent was busy on the other. Chat messages remained unanswered because agents would forget to sign in to the chat system. The company implemented chatbots and voice bots powered by Amazon Lex. Now, the call and chat systems are interoperable, and chats can be escalated to agent assisted calls when needed. When a call gets passed to an agent, the system also passes the full chat record and an analysis of the customer’s tone so that the agent is prepared to address the client’s needs and be empathetic toward the caller’s sentiment. WaFd worked with the AWS and Talkdesk teams to create and launch its new contact center solution in July 2022.
  sentences:
  - How did Yellow Class optimize its video files and improve performance using AWS services such as AWS Elemental MediaConvert?
  - How has FanDuel ensured the redundancy and reliability of its live video streams through the use of AWS Elemental MediaConnect and AWS Elemental MediaLive?
  - How did WaFd Bank use data from phone and chatbot conversations stored in a data lake on AWS to improve self-service functionality and better manage call center staff schedules?
- source_sentence: 'Alternatively, you can run the inference via code. Here is one example written in Python, using the requests library: import requests url = "https://<YOUR_API_GATEWAY_ENDPOINT_ID>. execute-api. <YOUR_ENDPOINT_REGION>. amazonaws. com/prod/question?question=\"What is the color of my car now?\"&context=\"My car used to be blue but I painted red\"" response = requests. request("GET", url, headers=headers, data=payload) print(response. text) The code outputs a string similar to the following: ''{"score":0. 6947233080863953,"start":38,"end":41,"answer":"red"}'' If you are interested in knowing more about deploying Generative AI and large language models on AWS, check out here: Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa Deploy large language models on AWS Inferentia2 using large model inference containers Clean up Inside the root directory of your repository, run the following code to clean up your resources: make destroy Conclusion In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model. You are welcome to try it out yourself, and we’re excited to hear your feedback! About the Authors Tingyi Li is an Enterprise Solutions Architect from AWS based out in Stockholm, Sweden supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano. Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. He engages with customers and helps them implement scalable and fully-functional ML applications. He is passionate about building and productionizing machine learning applications for customers and is always keen to explore around new trends and cutting-edge technologies in the AI/ML world. TAGS: Generative AI , Natural Language Processing Comments View Comments Resources Getting Started What''s New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow Twitter Facebook LinkedIn Twitch Email Updates.'
  sentences:
  - How did ALTBalaji use AWS Elemental MediaLive to handle a tenfold increase in viewership during the live streaming of Lock Upp, and what insights did they gain from this experience?
  - How has PayEye been able to accelerate their development process and enter the production phase within a few months using AWS services, and what impact has this had on their recruitment efforts and team focus?
  - How can Lambda be used to deploy trained ML models using a preferred web application framework?
model-index:
- name: BGE base Financial Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.5120967741935484
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8266129032258065
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9233870967741935
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9637096774193549
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5120967741935484
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2755376344086021
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18467741935483872
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09637096774193549
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5120967741935484
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8266129032258065
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9233870967741935
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9637096774193549
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7538879073840729
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6844038018433181
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6858592666542238
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.532258064516129
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8225806451612904
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9193548387096774
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.967741935483871
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.532258064516129
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27419354838709675
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18387096774193548
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09677419354838711
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.532258064516129
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8225806451612904
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9193548387096774
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.967741935483871
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7596718979684643
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6912602406554021
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6924236134719179
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.5241935483870968
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8225806451612904
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9193548387096774
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9596774193548387
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5241935483870968
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.27419354838709675
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1838709677419355
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0959677419354839
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5241935483870968
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8225806451612904
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9193548387096774
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9596774193548387
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7527772429981233
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6846406169994881
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6862769216923534
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.4959677419354839
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7903225806451613
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8911290322580645
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9556451612903226
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.4959677419354839
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.26344086021505375
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17822580645161293
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09556451612903227
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.4959677419354839
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7903225806451613
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8911290322580645
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9556451612903226
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.73375586078758
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6613495263696876
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6630698645438532
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.4475806451612903
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7661290322580645
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8790322580645161
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9475806451612904
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.4475806451612903
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2553763440860215
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17580645161290326
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09475806451612903
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.4475806451612903
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7661290322580645
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8790322580645161
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9475806451612904
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7052651530890945
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6260768689196109
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6277483838406475
      name: Cosine Map@100
---

# BGE base Financial Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
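In this stack, the Pooling module keeps only the hidden state of the `[CLS]` token (`pooling_mode_cls_token: True`), and `Normalize()` rescales each vector to unit length. A minimal NumPy sketch of those two steps, with illustrative shapes rather than the library's actual implementation:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """token_embeddings: (batch, seq_len, 768) hidden states from BertModel."""
    cls = token_embeddings[:, 0, :]                     # pooling_mode_cls_token: take first token
    norms = np.linalg.norm(cls, axis=1, keepdims=True)  # Normalize(): L2 norm per sentence
    return cls / norms

# toy check: the pooled embeddings come out with unit L2 norm
hidden = np.random.rand(2, 5, 768)
emb = cls_pool_and_normalize(hidden)
print(emb.shape)  # (2, 768)
```

Because the outputs are unit-normalized, a dot product between two embeddings equals their cosine similarity.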
539
+
540
+ ## Usage
541
+
542
+ ### Direct Usage (Sentence Transformers)
543
+
544
+ First install the Sentence Transformers library:
545
+
546
+ ```bash
547
+ pip install -U sentence-transformers
548
+ ```
549
+
550
+ Then you can load this model and run inference.
551
+ ```python
552
+ from sentence_transformers import SentenceTransformer
553
+
554
+ # Download from the 🤗 Hub
555
+ model = SentenceTransformer("anishareddyalla/bge-base-matryoshka-aws-casestudies")
556
+ # Run inference
557
+ sentences = [
+     'Alternatively, you can run the inference via code. Here is one example written in Python, using the requests library: import requests url = "https://<YOUR_API_GATEWAY_ENDPOINT_ID>. execute-api. <YOUR_ENDPOINT_REGION>. amazonaws. com/prod/question?question=\\"What is the color of my car now?\\"&context=\\"My car used to be blue but I painted red\\"" response = requests. request("GET", url, headers=headers, data=payload) print(response. text) The code outputs a string similar to the following: \'{"score":0. 6947233080863953,"start":38,"end":41,"answer":"red"}\' If you are interested in knowing more about deploying Generative AI and large language models on AWS, check out here: Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa Deploy large language models on AWS Inferentia2 using large model inference containers Clean up Inside the root directory of your repository, run the following code to clean up your resources: make destroy Conclusion In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model. You are welcome to try it out yourself, and we’re excited to hear your feedback! About the Authors Tingyi Li is an Enterprise Solutions Architect from AWS based out in Stockholm, Sweden supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano. Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. He engages with customers and helps them implement scalable and fully-functional ML applications. He is passionate about building and productionizing machine learning applications for customers and is always keen to explore around new trends and cutting-edge technologies in the AI/ML world. TAGS: Generative AI , Natural Language Processing Comments View Comments Resources Getting Started What\'s New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow Twitter Facebook LinkedIn Twitch Email Updates.',
+     'How can Lambda be used to deploy trained ML models using a preferred web application framework?',
+     'How has PayEye been able to accelerate their development process and enter the production phase within a few months using AWS services, and what impact has this had on their recruitment efforts and team focus?',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
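The `model.similarity` call above defaults to cosine similarity, and because this model's pipeline ends in a Normalize module its embeddings are unit-length, so cosine similarity reduces to a plain dot product. A minimal numpy sketch of that equivalence, using random stand-in vectors rather than real model output:

```python
import numpy as np

# Hypothetical stand-ins for model.encode() output: three L2-normalized vectors.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit vectors, cosine similarity is just the dot product.
similarities = embeddings @ embeddings.T
print(similarities.shape)  # (3, 3)
```

Each diagonal entry is 1.0 (a sentence compared with itself) and the matrix is symmetric, which is also what `model.similarity(embeddings, embeddings)` returns.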
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+ * Dataset: `dim_768`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.5121     |
+ | cosine_accuracy@3   | 0.8266     |
+ | cosine_accuracy@5   | 0.9234     |
+ | cosine_accuracy@10  | 0.9637     |
+ | cosine_precision@1  | 0.5121     |
+ | cosine_precision@3  | 0.2755     |
+ | cosine_precision@5  | 0.1847     |
+ | cosine_precision@10 | 0.0964     |
+ | cosine_recall@1     | 0.5121     |
+ | cosine_recall@3     | 0.8266     |
+ | cosine_recall@5     | 0.9234     |
+ | cosine_recall@10    | 0.9637     |
+ | cosine_ndcg@10      | 0.7539     |
+ | cosine_mrr@10       | 0.6844     |
+ | **cosine_map@100**  | **0.6859** |
+
+ #### Information Retrieval
+ * Dataset: `dim_512`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.5323     |
+ | cosine_accuracy@3   | 0.8226     |
+ | cosine_accuracy@5   | 0.9194     |
+ | cosine_accuracy@10  | 0.9677     |
+ | cosine_precision@1  | 0.5323     |
+ | cosine_precision@3  | 0.2742     |
+ | cosine_precision@5  | 0.1839     |
+ | cosine_precision@10 | 0.0968     |
+ | cosine_recall@1     | 0.5323     |
+ | cosine_recall@3     | 0.8226     |
+ | cosine_recall@5     | 0.9194     |
+ | cosine_recall@10    | 0.9677     |
+ | cosine_ndcg@10      | 0.7597     |
+ | cosine_mrr@10       | 0.6913     |
+ | **cosine_map@100**  | **0.6924** |
+
+ #### Information Retrieval
+ * Dataset: `dim_256`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.5242     |
+ | cosine_accuracy@3   | 0.8226     |
+ | cosine_accuracy@5   | 0.9194     |
+ | cosine_accuracy@10  | 0.9597     |
+ | cosine_precision@1  | 0.5242     |
+ | cosine_precision@3  | 0.2742     |
+ | cosine_precision@5  | 0.1839     |
+ | cosine_precision@10 | 0.096      |
+ | cosine_recall@1     | 0.5242     |
+ | cosine_recall@3     | 0.8226     |
+ | cosine_recall@5     | 0.9194     |
+ | cosine_recall@10    | 0.9597     |
+ | cosine_ndcg@10      | 0.7528     |
+ | cosine_mrr@10       | 0.6846     |
+ | **cosine_map@100**  | **0.6863** |
+
+ #### Information Retrieval
+ * Dataset: `dim_128`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.496      |
+ | cosine_accuracy@3   | 0.7903     |
+ | cosine_accuracy@5   | 0.8911     |
+ | cosine_accuracy@10  | 0.9556     |
+ | cosine_precision@1  | 0.496      |
+ | cosine_precision@3  | 0.2634     |
+ | cosine_precision@5  | 0.1782     |
+ | cosine_precision@10 | 0.0956     |
+ | cosine_recall@1     | 0.496      |
+ | cosine_recall@3     | 0.7903     |
+ | cosine_recall@5     | 0.8911     |
+ | cosine_recall@10    | 0.9556     |
+ | cosine_ndcg@10      | 0.7338     |
+ | cosine_mrr@10       | 0.6613     |
+ | **cosine_map@100**  | **0.6631** |
+
+ #### Information Retrieval
+ * Dataset: `dim_64`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.4476     |
+ | cosine_accuracy@3   | 0.7661     |
+ | cosine_accuracy@5   | 0.879      |
+ | cosine_accuracy@10  | 0.9476     |
+ | cosine_precision@1  | 0.4476     |
+ | cosine_precision@3  | 0.2554     |
+ | cosine_precision@5  | 0.1758     |
+ | cosine_precision@10 | 0.0948     |
+ | cosine_recall@1     | 0.4476     |
+ | cosine_recall@3     | 0.7661     |
+ | cosine_recall@5     | 0.879      |
+ | cosine_recall@10    | 0.9476     |
+ | cosine_ndcg@10      | 0.7053     |
+ | cosine_mrr@10       | 0.6261     |
+ | **cosine_map@100**  | **0.6277** |
+
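Each query in this evaluation set has exactly one relevant document, which is why recall@k equals accuracy@k in every table, and precision@k is accuracy@k divided by k (e.g. 0.8266 / 3 ≈ 0.2755 for `dim_768`). The toy sketch below shows how accuracy@k and MRR@k fall out of a query–document similarity matrix; the data and helper name are illustrative, not the evaluator's actual internals:

```python
import numpy as np

def ir_metrics(sim, relevant, k=10):
    """accuracy@k and MRR@k when each query has exactly one relevant doc.

    sim: (n_queries, n_docs) similarity matrix
    relevant: index of the single relevant doc for each query
    """
    # Rank of the relevant doc for each query (0 = top hit).
    order = np.argsort(-sim, axis=1)
    ranks = np.array([np.where(order[i] == relevant[i])[0][0]
                      for i in range(len(relevant))])
    accuracy_at_k = np.mean(ranks < k)
    mrr_at_k = np.mean(np.where(ranks < k, 1.0 / (ranks + 1), 0.0))
    return accuracy_at_k, mrr_at_k

# Toy example: 3 queries, 4 docs; the relevant docs are 0, 2 and 1.
sim = np.array([[0.9, 0.1, 0.2, 0.0],
                [0.3, 0.2, 0.8, 0.1],
                [0.5, 0.4, 0.6, 0.1]])
acc1, _ = ir_metrics(sim, [0, 2, 1], k=1)
acc3, mrr = ir_metrics(sim, [0, 2, 1], k=3)
print(acc1, acc3, mrr)  # 2/3, 1.0, (1 + 1 + 1/3) / 3
```

Here two queries rank their relevant document first and the third ranks it third, giving accuracy@1 = 2/3, accuracy@3 = 1.0 and MRR@3 ≈ 0.778.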
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 2,231 training samples
+ * Columns: <code>positive</code> and <code>anchor</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | positive                                                                            | anchor                                                                            |
+   |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
+   | type    | string                                                                              | string                                                                            |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 430.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 33.49 tokens</li><li>max: 65 tokens</li></ul> |
+ * Samples:
+   | positive | anchor |
+   |:---------|:-------|
+   | <code>TCSG is helping students enter a competitive workforce as educated cloud professionals and providing opportunities for success. TCSG built its Cloud Academy using AWS Academy, which provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. TCSG launched the TCSG Cloud Academy in two forms: one as a specialization within an existing associate’s degree and the second as a stand-alone technical certificate of credit. For the technical certificate of credit, students who have existing degrees can enter the curriculum to focus on cloud computing and participate in hands-on cloud experiences using AWS services. Tiếng Việt Italiano ไทย The Technical College System of Georgia is the state government agency that supervises workforce development of more than 294,000 students across 22 technical colleges, 88 campuses, and more than 600 programs. Using the AWS curriculum and technology as the foundation for its courses, TCSG is preparing students to earn industry-recognized AWS Certifications to increase employability while improving accessibility to cloud education by offering the academy virtually and remotely. Learn more » TCSG is the state of Georgia government agency that supervises workforce development of hundreds of thousands of students across 22 technical colleges, 88 campuses, and more than 600 programs. The agency aims to run a system of technical education using the latest technology that’s accessible to all adults and corporate citizens in the state. To develop and deploy its new cloud-focused curriculum, it worked with AWS Education Programs, which helps TCSG institutions develop initiatives that align education to careers in the cloud and promote student employability, preparing diverse learners for in-demand cloud roles around the world. In 2020, the organization officially launched the TCSG Cloud Academy—a virtual program for students pursuing expertise and certifications in cloud computing—on its eCampus virtual learning system. Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Português.</code> | <code>How has the use of AWS Academy by TCSG helped prepare students for pursuing industry-recognized certifications and in-demand cloud jobs in Georgia's workforce?</code> |
+   | <code>This prompt is then provided to the LLM for generating an answer to the user question. @router. post("/rag") async def rag_handler(req: Request) -> Dict[str, Any]: # dump the received request for debugging purposes logger. info(f"req={req}") # initialize vector db and SageMaker Endpoint _init(req) # Use the vector db to find similar documents to the query # the vector db call would automatically convert the query text # into embeddings docs = _vector_db. similarity_search(req. q, k=req. max_matching_docs) logger. info(f"here are the {req. max_matching_docs} closest matching docs to the query=\"{req. q}\"") for d in docs: logger. info(f"---------") logger. info(d) logger. info(f"---------") # now that we have the matching docs, lets pack them as a context # into the prompt and ask the LLM to generate a response prompt_template = """Answer based on context:\n\n{context}\n\n{question}""" prompt = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) logger. info(f"prompt sent to llm = \"{prompt}\"") chain = load_qa_chain(llm=_sm_llm, prompt=prompt) answer = chain({"input_documents": docs, "question": req. q}, return_only_outputs=True)['output_text'] logger. info(f"answer received from llm,\nquestion: \"{req. q}\"\nanswer: \"{answer}\"") resp = {'question': req. q, 'answer': answer} if req. verbose is True: resp['docs'] = docs return resp Clean up To avoid incurring future charges, delete the resources. You can do this by deleting the CloudFormation stack as shown in the following screenshot.</code> | <code>What resources need to be deleted to avoid future charges, and how can they be deleted?</code> |
+   | <code>append(input_1_s3_location) async_response = base_model_predictor. predict_async(input_path=input_1_s3_location) output_locations. append(async_response. output_path) if i > max_images: break This may take up to 30 minutes or more depending on how much data you have uploaded for asynchronous inference. You can visualize one of these inferences as follows: plot_response('data/single. out') Convert the asynchronous inference output to a Ground Truth input manifest In this step, we create an input manifest for a bounding box verification job on Ground Truth. We upload the Ground Truth UI template and label categories file, and create the verification job. The notebook linked to this post uses a private workforce to perform the labeling; you can change this if you’re using other types of workforces. For more details, refer to the full code in the notebook. Verify labels from the auto-labeling process in Ground Truth In this step, we complete the verification by accessing the labeling portal. For more details, refer to here. When you access the portal as a workforce member, you will be able to see the bounding boxes created by the JumpStart model and make adjustments as required. You can use this template to repeat auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. Clean up In this step, we clean up by deleting the endpoint and the model created in previous steps: # Delete the SageMaker endpoint base_model_predictor. delete_model() base_model_predictor. delete_endpoint() Conclusion In this post, we walked through an auto-labeling process involving JumpStart and asynchronous inference. We used the results of the auto-labeling process to convert and visualize labeled data on a real-world dataset. You can use the solution to perform auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. You can also explore using tools like the Segment Anything Model for generating segment masks as part of the auto-labeling process. In future posts in this series, we will cover the perception module and segmentation.</code> | <code>How can you visualize the inferences generated by the asynchronous inference process using the provided solution?</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
+
+ ### Training Hyperparameters
765
+ #### Non-Default Hyperparameters
766
+
767
+ - `eval_strategy`: epoch
768
+ - `per_device_train_batch_size`: 32
769
+ - `per_device_eval_batch_size`: 16
770
+ - `gradient_accumulation_steps`: 16
771
+ - `learning_rate`: 2e-05
772
+ - `num_train_epochs`: 4
773
+ - `lr_scheduler_type`: cosine
774
+ - `warmup_ratio`: 0.1
775
+ - `bf16`: True
776
+ - `tf32`: True
777
+ - `load_best_model_at_end`: True
778
+ - `optim`: adamw_torch_fused
779
+ - `batch_sampler`: no_duplicates
780
+
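`lr_scheduler_type: cosine` with `warmup_ratio: 0.1` means the learning rate ramps up linearly over the first 10% of optimizer steps and then decays along a cosine curve to zero. A small sketch mirroring (not importing) that schedule:

```python
import math

def lr_at_step(step, total_steps, base_lr=2e-05, warmup_ratio=0.1):
    """Linear warmup to base_lr, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 16  # this run took 16 optimizer steps over 4 epochs
print(lr_at_step(0, total))   # 0.0 (start of warmup)
print(lr_at_step(1, total))   # 2e-05 (warmup is only ~1 step here)
print(lr_at_step(16, total))  # ~0.0 (fully decayed)
```

Note that with `per_device_train_batch_size: 32` and `gradient_accumulation_steps: 16`, the effective batch size is 512, so one epoch over 2,231 samples is only about 4 optimizer steps, consistent with the training logs below.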
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 16
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 4
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: True
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch      | Step  | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
+ |:----------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
+ | 0.9143     | 4     | -             | 0.6663                 | 0.6851                 | 0.7027                 | 0.6120                | 0.6998                 |
+ | **1.8286** | **8** | **-**         | **0.6758**             | **0.6822**             | **0.6966**             | **0.6311**            | **0.6941**             |
+ | 2.2857     | 10    | 1.883         | -                      | -                      | -                      | -                     | -                      |
+ | 2.9714     | 13    | -             | 0.6631                 | 0.6881                 | 0.6904                 | 0.6245                | 0.6873                 |
+ | 3.6571     | 16    | -             | 0.6631                 | 0.6863                 | 0.6924                 | 0.6277                | 0.6859                 |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.42.4
+ - PyTorch: 2.3.1+cu121
+ - Accelerate: 0.32.1
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "BAAI/bge-base-en-v1.5",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.42.4",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.42.4",
+     "pytorch": "2.3.1+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76ba3669d42df79ec15b2097161615713d7cf0dc3d3f52bdc702e5d2648e4a99
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
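Per `modules.json`, inference composes three modules in order: the BERT transformer, CLS-token pooling (`1_Pooling/config.json` sets `pooling_mode_cls_token` to true), and L2 normalization. A numpy sketch of what the last two modules compute, given hypothetical transformer output:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings):
    """(batch, seq_len, hidden) token states -> (batch, hidden) unit vectors."""
    cls = token_embeddings[:, 0, :]  # Pooling module: take the [CLS] token state
    # Normalize module: scale each sentence embedding to unit L2 norm
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# Hypothetical hidden states: batch of 2, max_seq_length 512, hidden_size 768.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2, 512, 768))
sentence_embeddings = cls_pool_and_normalize(hidden_states)
print(sentence_embeddings.shape)  # (2, 768)
```

This is why the resulting embeddings are unit-length and cosine similarity can be computed as a dot product.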
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff