danicafisher committed
Commit 3d39bc8
Parent: bfa730d

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 384,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
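The only pooling mode enabled above is `pooling_mode_mean_tokens`: the sentence embedding is the mean of the 384-dimensional token embeddings, restricted to non-padding tokens. Below is a minimal sketch of what that means, assuming token embeddings of shape `(batch, seq_len, 384)` and an attention mask from the tokenizer; it is an illustration, not the library's internal implementation.

```python
# Illustrative sketch of mean-token pooling (not sentence-transformers' internal code).
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the 384-dim token embeddings over real (non-padding) tokens."""
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)   # sum over unmasked tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)        # number of unmasked tokens
    return summed / counts                          # (batch, 384)
```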
README.md ADDED
@@ -0,0 +1,780 @@
+ ---
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:128
+ - loss:MultipleNegativesRankingLoss
12
+ widget:
13
+ - source_sentence: What are the implications of large language models potentially
14
+ deceiving their users under pressure, as discussed in the technical report by
15
+ Scheurer et al (2023)?
16
+ sentences:
17
+ - "48 \n• Data protection \n• Data retention \n• Consistency in use of defining\
18
+ \ key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \
19
+ \ \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n\
20
+ • Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV\
21
+ \ practices \n• Secure software development practices \n• Stakeholder engagement\
22
+ \ \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower\
23
+ \ protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing\
24
+ \ acceptable use policies and guidance for the use of GAI in formal human-AI teaming\
25
+ \ settings \nas well as different levels of human-AI configurations can help to\
26
+ \ decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment\
27
+ \ between systems and users. These practices are just \none example of adapting\
28
+ \ existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations\
29
+ \ \nOrganizations may seek to acquire, embed, incorporate, or use open-source\
30
+ \ or proprietary third-party \nGAI models, systems, or generated data for various\
31
+ \ applications across an enterprise. Use of these GAI \ntools and inputs has implications\
32
+ \ for all functions of the organization – including but not limited to \nacquisition,\
33
+ \ human resources, legal, compliance, and IT services – regardless of whether\
34
+ \ they are carried \nout by employees or third parties. Many of the actions cited\
35
+ \ above are relevant and options for \naddressing third-party considerations.\
36
+ \ \nThird party GAI integrations may give rise to increased intellectual property,\
37
+ \ data privacy, or information \nsecurity risks, pointing to the need for clear\
38
+ \ guidelines for transparency and risk management regarding \nthe collection and\
39
+ \ use of third-party data for model inputs. Organizations may consider varying\
40
+ \ risk \ncontrols for foundation models, fine-tuned models, and embedded tools,\
41
+ \ enhanced processes for \ninteracting with external GAI technologies or service\
42
+ \ providers. Organizations can apply standard or \nexisting risk controls and\
43
+ \ processes to proprietary or open-source GAI technologies, data, and third-party\
44
+ \ \nservice providers, including acquisition and procurement due diligence, requests\
45
+ \ for software bills of \nmaterials (SBOMs), application of service level agreements\
46
+ \ (SLAs), and statement on standards for \nattestation engagement (SSAE) reports\
47
+ \ to help with third-party transparency and risk management for \nGAI systems.\
48
+ \ \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in\
49
+ \ which GAI systems may be developed, used, and repurposed \ncomplicates risk\
50
+ \ mapping and pre-deployment measurement efforts. Robust test, evaluation, validation,\
51
+ \ \nand verification (TEVV) processes can be iteratively applied – and documented\
52
+ \ – in early stages of the AI \nlifecycle and informed by representative AI Actors\
53
+ \ (see Figure 3 of the AI RMF). Until new and rigorous"
54
+ - "21 \nGV-6.1-005 \nImplement a use-cased based supplier risk assessment framework\
55
+ \ to evaluate and \nmonitor third-party entities’ performance and adherence to\
56
+ \ content provenance \nstandards and technologies to detect anomalies and unauthorized\
57
+ \ changes; \nservices acquisition and value chain risk management; and legal compliance.\
58
+ \ \nData Privacy; Information \nIntegrity; Information Security; \nIntellectual\
59
+ \ Property; Value Chain \nand Component Integration \nGV-6.1-006 Include clauses\
60
+ \ in contracts which allow an organization to evaluate third-party \nGAI processes\
61
+ \ and standards. \nInformation Integrity \nGV-6.1-007 Inventory all third-party\
62
+ \ entities with access to organizational content and \nestablish approved GAI\
63
+ \ technology and service provider lists. \nValue Chain and Component \nIntegration\
64
+ \ \nGV-6.1-008 Maintain records of changes to content made by third parties to\
65
+ \ promote content \nprovenance, including sources, timestamps, metadata. \nInformation\
66
+ \ Integrity; Value Chain \nand Component Integration; \nIntellectual Property\
67
+ \ \nGV-6.1-009 \nUpdate and integrate due diligence processes for GAI acquisition\
68
+ \ and \nprocurement vendor assessments to include intellectual property, data\
69
+ \ privacy, \nsecurity, and other risks. For example, update processes to: Address\
70
+ \ solutions that \nmay rely on embedded GAI technologies; Address ongoing monitoring,\
71
+ \ \nassessments, and alerting, dynamic risk assessments, and real-time reporting\
72
+ \ \ntools for monitoring third-party GAI risks; Consider policy adjustments across\
73
+ \ GAI \nmodeling libraries, tools and APIs, fine-tuned models, and embedded tools;\
74
+ \ \nAssess GAI vendors, open-source or proprietary GAI tools, or GAI service \n\
75
+ providers against incident or vulnerability databases. \nData Privacy; Human-AI\
76
+ \ \nConfiguration; Information \nSecurity; Intellectual Property; \nValue Chain\
77
+ \ and Component \nIntegration; Harmful Bias and \nHomogenization \nGV-6.1-010\
78
+ \ \nUpdate GAI acceptable use policies to address proprietary and open-source\
79
+ \ GAI \ntechnologies and data, and contractors, consultants, and other third-party\
80
+ \ \npersonnel. \nIntellectual Property; Value Chain \nand Component Integration\
81
+ \ \nAI Actor Tasks: Operation and Monitoring, Procurement, Third-party entities\
82
+ \ \n \nGOVERN 6.2: Contingency processes are in place to handle failures or incidents\
83
+ \ in third-party data or AI systems deemed to be \nhigh-risk. \nAction ID \nSuggested\
84
+ \ Action \nGAI Risks \nGV-6.2-001 \nDocument GAI risks associated with system\
85
+ \ value chain to identify over-reliance \non third-party data and to identify\
86
+ \ fallbacks. \nValue Chain and Component \nIntegration \nGV-6.2-002 \nDocument\
87
+ \ incidents involving third-party GAI data and systems, including open-\ndata\
88
+ \ and open-source software. \nIntellectual Property; Value Chain \nand Component\
89
+ \ Integration"
90
+ - "58 \nSatariano, A. et al. (2023) The People Onscreen Are Fake. The Disinformation\
91
+ \ Is Real. New York Times. \nhttps://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html\
92
+ \ \nSchaul, K. et al. (2024) Inside the secret list of websites that make AI like\
93
+ \ ChatGPT sound smart. \nWashington Post. https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/\
94
+ \ \nScheurer, J. et al. (2023) Technical report: Large language models can strategically\
95
+ \ deceive their users \nwhen put under pressure. arXiv. https://arxiv.org/abs/2311.07590\
96
+ \ \nShelby, R. et al. (2023) Sociotechnical Harms of Algorithmic Systems: Scoping\
97
+ \ a Taxonomy for Harm \nReduction. arXiv. https://arxiv.org/pdf/2210.05791 \n\
98
+ Shevlane, T. et al. (2023) Model evaluation for extreme risks. arXiv. https://arxiv.org/pdf/2305.15324\
99
+ \ \nShumailov, I. et al. (2023) The curse of recursion: training on generated\
100
+ \ data makes models forget. arXiv. \nhttps://arxiv.org/pdf/2305.17493v2 \nSmith,\
101
+ \ A. et al. (2023) Hallucination or Confabulation? Neuroanatomy as metaphor in\
102
+ \ Large Language \nModels. PLOS Digital Health. \nhttps://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000388\
103
+ \ \nSoice, E. et al. (2023) Can large language models democratize access to dual-use\
104
+ \ biotechnology? arXiv. \nhttps://arxiv.org/abs/2306.03809 \nSolaiman, I. et al.\
105
+ \ (2023) The Gradient of Generative AI Release: Methods and Considerations. arXiv.\
106
+ \ \nhttps://arxiv.org/abs/2302.04844 \nStaab, R. et al. (2023) Beyond Memorization:\
107
+ \ Violating Privacy via Inference With Large Language \nModels. arXiv. https://arxiv.org/pdf/2310.07298\
108
+ \ \nStanford, S. et al. (2023) Whose Opinions Do Language Models Reflect? arXiv.\
109
+ \ \nhttps://arxiv.org/pdf/2303.17548 \nStrubell, E. et al. (2019) Energy and Policy\
110
+ \ Considerations for Deep Learning in NLP. arXiv. \nhttps://arxiv.org/pdf/1906.02243\
111
+ \ \nThe White House (2016) Circular No. A-130, Managing Information as a Strategic\
112
+ \ Resource. \nhttps://www.whitehouse.gov/wp-\ncontent/uploads/legacy_drupal_files/omb/circulars/A130/a130revised.pdf\
113
+ \ \nThe White House (2023) Executive Order on the Safe, Secure, and Trustworthy\
114
+ \ Development and Use of \nArtificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-\n\
115
+ actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-\n\
116
+ artificial-intelligence/ \nThe White House (2022) Roadmap for Researchers on Priorities\
117
+ \ Related to Information Integrity \nResearch and Development. https://www.whitehouse.gov/wp-content/uploads/2022/12/Roadmap-\n\
118
+ Information-Integrity-RD-2022.pdf? \nThiel, D. (2023) Investigation Finds AI Image\
119
+ \ Generation Models Trained on Child Abuse. Stanford Cyber \nPolicy Center. https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-\n\
120
+ trained-child-abuse"
121
+ - source_sentence: How should human subjects be informed about their options to withdraw
122
+ participation or revoke consent in GAI applications?
123
+ sentences:
124
+ - "39 \nMS-3.3-004 \nProvide input for training materials about the capabilities\
125
+ \ and limitations of GAI \nsystems related to digital content transparency for\
126
+ \ AI Actors, other \nprofessionals, and the public about the societal impacts\
127
+ \ of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI\
128
+ \ Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \n\
129
+ MS-3.3-005 \nRecord and integrate structured feedback about content provenance\
130
+ \ from \noperators, users, and potentially impacted communities through the use\
131
+ \ of \nmethods such as user research studies, focus groups, or community forums.\
132
+ \ \nActively seek feedback on generated content quality and potential biases.\
133
+ \ \nAssess the general awareness among end users and impacted communities \nabout\
134
+ \ the availability of these feedback channels. \nHuman-AI Configuration; \nInformation\
135
+ \ Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment,\
136
+ \ Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV\
137
+ \ \n \nMEASURE 4.2: Measurement results regarding AI system trustworthiness in\
138
+ \ deployment context(s) and across the AI lifecycle are \ninformed by input from\
139
+ \ domain experts and relevant AI Actors to validate whether the system is performing\
140
+ \ consistently as \nintended. Results are documented. \nAction ID \nSuggested\
141
+ \ Action \nGAI Risks \nMS-4.2-001 \nConduct adversarial testing at a regular cadence\
142
+ \ to map and measure GAI risks, \nincluding tests to address attempts to deceive\
143
+ \ or manipulate the application of \nprovenance techniques or other misuses. Identify\
144
+ \ vulnerabilities and \nunderstand potential misuse scenarios and unintended outputs.\
145
+ \ \nInformation Integrity; Information \nSecurity \nMS-4.2-002 \nEvaluate GAI\
146
+ \ system performance in real-world scenarios to observe its \nbehavior in practical\
147
+ \ environments and reveal issues that might not surface in \ncontrolled and optimized\
148
+ \ testing environments. \nHuman-AI Configuration; \nConfabulation; Information\
149
+ \ \nSecurity \nMS-4.2-003 \nImplement interpretability and explainability methods\
150
+ \ to evaluate GAI system \ndecisions and verify alignment with intended purpose.\
151
+ \ \nInformation Integrity; Harmful Bias \nand Homogenization \nMS-4.2-004 \nMonitor\
152
+ \ and document instances where human operators or other systems \noverride the\
153
+ \ GAI's decisions. Evaluate these cases to understand if the overrides \nare linked\
154
+ \ to issues related to content provenance. \nInformation Integrity \nMS-4.2-005\
155
+ \ \nVerify and document the incorporation of results of structured public feedback\
156
+ \ \nexercises into design, implementation, deployment approval (“go”/“no-go” \n\
157
+ decisions), monitoring, and decommission decisions. \nHuman-AI Configuration; \n\
158
+ Information Security \nAI Actor Tasks: AI Deployment, Domain Experts, End-Users,\
159
+ \ Operation and Monitoring, TEVV"
160
+ - "30 \nMEASURE 2.2: Evaluations involving human subjects meet applicable requirements\
161
+ \ (including human subject protection) and are \nrepresentative of the relevant\
162
+ \ population. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.2-001 Assess and\
163
+ \ manage statistical biases related to GAI content provenance through \ntechniques\
164
+ \ such as re-sampling, re-weighting, or adversarial training. \nInformation Integrity;\
165
+ \ Information \nSecurity; Harmful Bias and \nHomogenization \nMS-2.2-002 \nDocument\
166
+ \ how content provenance data is tracked and how that data interacts \nwith privacy\
167
+ \ and security. Consider: Anonymizing data to protect the privacy of \nhuman subjects;\
168
+ \ Leveraging privacy output filters; Removing any personally \nidentifiable information\
169
+ \ (PII) to prevent potential harm or misuse. \nData Privacy; Human AI \nConfiguration;\
170
+ \ Information \nIntegrity; Information Security; \nDangerous, Violent, or Hateful\
171
+ \ \nContent \nMS-2.2-003 Provide human subjects with options to withdraw participation\
172
+ \ or revoke their \nconsent for present or future use of their data in GAI applications.\
173
+ \ \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity \nMS-2.2-004\
174
+ \ \nUse techniques such as anonymization, differential privacy or other privacy-\n\
175
+ enhancing technologies to minimize the risks associated with linking AI-generated\
176
+ \ \ncontent back to individual human subjects. \nData Privacy; Human-AI \nConfiguration\
177
+ \ \nAI Actor Tasks: AI Development, Human Factors, TEVV \n \nMEASURE 2.3: AI system\
178
+ \ performance or assurance criteria are measured qualitatively or quantitatively\
179
+ \ and demonstrated for \nconditions similar to deployment setting(s). Measures\
180
+ \ are documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.3-001 Consider\
181
+ \ baseline model performance on suites of benchmarks when selecting a \nmodel\
182
+ \ for fine tuning or enhancement with retrieval-augmented generation. \nInformation\
183
+ \ Security; \nConfabulation \nMS-2.3-002 Evaluate claims of model capabilities\
184
+ \ using empirically validated methods. \nConfabulation; Information \nSecurity\
185
+ \ \nMS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors,\
186
+ \ such as those \nwith system release approval authority. \nHuman-AI Configuration"
187
+ - "36 \nMEASURE 2.11: Fairness and bias – as identified in the MAP function – are\
188
+ \ evaluated and results are documented. \nAction ID \nSuggested Action \nGAI Risks\
189
+ \ \nMS-2.11-001 \nApply use-case appropriate benchmarks (e.g., Bias Benchmark\
190
+ \ Questions, Real \nHateful or Harmful Prompts, Winogender Schemas15) to quantify\
191
+ \ systemic bias, \nstereotyping, denigration, and hateful content in GAI system\
192
+ \ outputs; \nDocument assumptions and limitations of benchmarks, including any\
193
+ \ actual or \npossible training/test data cross contamination, relative to in-context\
194
+ \ \ndeployment environment. \nHarmful Bias and Homogenization \nMS-2.11-002 \n\
195
+ Conduct fairness assessments to measure systemic bias. Measure GAI system \nperformance\
196
+ \ across demographic groups and subgroups, addressing both \nquality of service\
197
+ \ and any allocation of services and resources. Quantify harms \nusing: field testing\
198
+ \ with sub-group populations to determine likelihood of \nexposure to generated\
199
+ \ content exhibiting harmful bias, AI red-teaming with \ncounterfactual and low-context\
200
+ \ (e.g., “leader,” “bad guys”) prompts. For ML \npipelines or business processes\
201
+ \ with categorical or numeric outcomes that rely \non GAI, apply general fairness\
202
+ \ metrics (e.g., demographic parity, equalized odds, \nequal opportunity, statistical\
203
+ \ hypothesis tests), to the pipeline or business \noutcome where appropriate;\
204
+ \ Custom, context-specific metrics developed in \ncollaboration with domain experts\
205
+ \ and affected communities; Measurements of \nthe prevalence of denigration in\
206
+ \ generated content in deployment (e.g., sub-\nsampling a fraction of traffic and\
207
+ \ manually annotating denigrating content). \nHarmful Bias and Homogenization;\
208
+ \ \nDangerous, Violent, or Hateful \nContent \nMS-2.11-003 \nIdentify the classes\
209
+ \ of individuals, groups, or environmental ecosystems which \nmight be impacted\
210
+ \ by GAI systems through direct engagement with potentially \nimpacted communities.\
211
+ \ \nEnvironmental; Harmful Bias and \nHomogenization \nMS-2.11-004 \nReview, document,\
212
+ \ and measure sources of bias in GAI training and TEVV data: \nDifferences in distributions\
213
+ \ of outcomes across and within groups, including \nintersecting groups; Completeness,\
214
+ \ representativeness, and balance of data \nsources; demographic group and subgroup\
215
+ \ coverage in GAI system training \ndata; Forms of latent systemic bias in images,\
216
+ \ text, audio, embeddings, or other \ncomplex or unstructured data; Input data\
217
+ \ features that may serve as proxies for \ndemographic group membership (i.e.,\
218
+ \ image metadata, language dialect) or \notherwise give rise to emergent bias\
219
+ \ within GAI systems; The extent to which \nthe digital divide may negatively\
220
+ \ impact representativeness in GAI system \ntraining and TEVV data; Filtering\
221
+ \ of hate speech or content in GAI system \ntraining data; Prevalence of GAI-generated\
222
+ \ data in GAI system training data. \nHarmful Bias and Homogenization \n \n \n\
223
+ 15 Winogender Schemas is a sample set of paired sentences which differ only by\
224
+ \ gender of the pronouns used, \nwhich can be used to evaluate gender bias in\
225
+ \ natural language processing coreference resolution systems."
226
+ - source_sentence: What is the title of the NIST publication related to Artificial
227
+ Intelligence Risk Management?
228
+ sentences:
229
+ - "53 \nDocumenting, reporting, and sharing information about GAI incidents can\
230
+ \ help mitigate and prevent \nharmful outcomes by assisting relevant AI Actors\
231
+ \ in tracing impacts to their source. Greater awareness \nand standardization\
232
+ \ of GAI incident reporting could promote this transparency and improve GAI risk\
233
+ \ \nmanagement across the AI ecosystem. \nDocumentation and Involvement of AI\
234
+ \ Actors \nAI Actors should be aware of their roles in reporting AI incidents.\
235
+ \ To better understand previous incidents \nand implement measures to prevent\
236
+ \ similar ones in the future, organizations could consider developing \nguidelines\
237
+ \ for publicly available incident reporting which include information about AI\
238
+ \ actor \nresponsibilities. These guidelines would help AI system operators identify\
239
+ \ GAI incidents across the AI \nlifecycle and with AI Actors regardless of role.\
240
+ \ Documentation and review of third-party inputs and \nplugins for GAI systems\
241
+ \ is especially important for AI Actors in the context of incident disclosure;\
242
+ \ LLM \ninputs and content delivered through these plugins is often distributed,\
243
+ \ with inconsistent or insufficient \naccess control. \nDocumentation practices\
244
+ \ including logging, recording, and analyzing GAI incidents can facilitate \n\
245
+ smoother sharing of information with relevant AI Actors. Regular information sharing,\
246
+ \ change \nmanagement records, version history and metadata can also empower AI\
247
+ \ Actors responding to and \nmanaging AI incidents."
248
+ - "23 \nMP-1.1-002 \nDetermine and document the expected and acceptable GAI system\
249
+ \ context of \nuse in collaboration with socio-cultural and other domain experts,\
250
+ \ by assessing: \nAssumptions and limitations; Direct value to the organization;\
251
+ \ Intended \noperational environment and observed usage patterns; Potential positive\
252
+ \ and \nnegative impacts to individuals, public safety, groups, communities, \n\
253
+ organizations, democratic institutions, and the physical environment; Social \n\
254
+ norms and expectations. \nHarmful Bias and Homogenization \nMP-1.1-003 \nDocument\
255
+ \ risk measurement plans to address identified risks. Plans may \ninclude, as applicable:\
256
+ \ Individual and group cognitive biases (e.g., confirmation \nbias, funding bias,\
257
+ \ groupthink) for AI Actors involved in the design, \nimplementation, and use\
258
+ \ of GAI systems; Known past GAI system incidents and \nfailure modes; In-context\
259
+ \ use and foreseeable misuse, abuse, and off-label use; \nOver reliance on quantitative\
260
+ \ metrics and methodologies without sufficient \nawareness of their limitations\
261
+ \ in the context(s) of use; Standard measurement \nand structured human feedback\
262
+ \ approaches; Anticipated human-AI \nconfigurations. \nHuman-AI Configuration; Harmful\
263
+ \ \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent \nMP-1.1-004\
264
+ \ \nIdentify and document foreseeable illegal uses or applications of the GAI\
265
+ \ system \nthat surpass organizational risk tolerances. \nCBRN Information or\
266
+ \ Capabilities; \nDangerous, Violent, or Hateful \nContent; Obscene, Degrading,\
267
+ \ \nand/or Abusive Content \nAI Actor Tasks: AI Deployment \n \nMAP 1.2: Interdisciplinary\
268
+ \ AI Actors, competencies, skills, and capacities for establishing context reflect\
269
+ \ demographic diversity and \nbroad domain and user experience expertise, and\
270
+ \ their participation is documented. Opportunities for interdisciplinary \ncollaboration\
271
+ \ are prioritized. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.2-001 \n\
272
+ Establish and empower interdisciplinary teams that reflect a wide range of \ncapabilities,\
273
+ \ competencies, demographic groups, domain expertise, educational \nbackgrounds,\
274
+ \ lived experiences, professions, and skills across the enterprise to \ninform\
275
+ \ and conduct risk measurement and management functions. \nHuman-AI Configuration;\
276
+ \ Harmful \nBias and Homogenization \nMP-1.2-002 \nVerify that data or benchmarks\
277
+ \ used in risk measurement, and users, \nparticipants, or subjects involved in\
278
+ \ structured GAI public feedback exercises \nare representative of diverse in-context\
279
+ \ user populations. \nHuman-AI Configuration; Harmful \nBias and Homogenization\
280
+ \ \nAI Actor Tasks: AI Deployment"
281
+ - "NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence\
282
+ \ Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n\
283
+ \ \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1"
284
+ - source_sentence: What is the purpose of the AI Risk Management Framework (AI RMF)
285
+ for Generative AI as outlined in the document?
286
+ sentences:
287
+ - "Table of Contents \n1. \nIntroduction ..............................................................................................................................................1\
288
+ \ \n2. \nOverview of Risks Unique to or Exacerbated by GAI .....................................................................2\
289
+ \ \n3. \nSuggested Actions to Manage GAI Risks .........................................................................................\
290
+ \ 12 \nAppendix A. Primary GAI Considerations ...............................................................................................\
291
+ \ 47 \nAppendix B. References ................................................................................................................................\
292
+ \ 54"
293
+ - "13 \n• \nNot every suggested action applies to every AI Actor14 or is relevant\
294
+ \ to every AI Actor Task. For \nexample, suggested actions relevant to GAI developers\
295
+ \ may not be relevant to GAI deployers. \nThe applicability of suggested actions\
296
+ \ to relevant AI actors should be determined based on \norganizational considerations\
297
+ \ and their unique uses of GAI systems. \nEach table of suggested actions includes:\
298
+ \ \n• \nAction ID: Each Action ID corresponds to the relevant AI RMF function\
299
+ \ and subcategory (e.g., GV-\n1.1-001 corresponds to the first suggested action\
300
+ \ for Govern 1.1, GV-1.1-002 corresponds to the \nsecond suggested action for\
301
+ \ Govern 1.1). AI RMF functions are tagged as follows: GV = Govern; \nMP = Map;\
302
+ \ MS = Measure; MG = Manage. \n• \nSuggested Action: Steps an organization or\
303
+ \ AI actor can take to manage GAI risks. \n• \nGAI Risks: Tags linking suggested\
304
+ \ actions with relevant GAI risks. \n• \nAI Actor Tasks: Pertinent AI Actor Tasks\
305
+ \ for each subcategory. Not every AI Actor Task listed will \napply to every suggested\
306
+ \ action in the subcategory (i.e., some apply to AI development and \nothers apply\
307
+ \ to AI deployment). \nThe tables below begin with the AI RMF subcategory, shaded\
308
+ \ in blue, followed by suggested actions. \n \nGOVERN 1.1: Legal and regulatory\
309
+ \ requirements involving AI are understood, managed, and documented. \nAction\
310
+ \ ID \nSuggested Action \nGAI Risks \nGV-1.1-001 Align GAI development and use\
311
+ \ with applicable laws and regulations, including \nthose related to data privacy,\
312
+ \ copyright and intellectual property law. \nData Privacy; Harmful Bias and \n\
313
+ Homogenization; Intellectual \nProperty \nAI Actor Tasks: Governance and Oversight\
314
+ \ \n \n \n \n14 AI Actors are defined by the OECD as “those who play an active\
315
+ \ role in the AI system lifecycle, including \norganizations and individuals that\
316
+ \ deploy or operate AI.” See Appendix A of the AI RMF for additional descriptions\
317
+ \ \nof AI Actors and AI Actor Tasks."
318
+ - "1 \n1. \nIntroduction \nThis document is a cross-sectoral profile of and companion\
319
+ \ resource for the AI Risk Management \nFramework (AI RMF 1.0) for Generative\
320
+ \ AI,1 pursuant to President Biden’s Executive Order (EO) 14110 on \nSafe, Secure,\
321
+ \ and Trustworthy Artificial Intelligence.2 The AI RMF was released in January\
322
+ \ 2023, and is \nintended for voluntary use and to improve the ability of organizations\
323
+ \ to incorporate trustworthiness \nconsiderations into the design, development,\
324
+ \ use, and evaluation of AI products, services, and systems. \nA profile is an\
325
+ \ implementation of the AI RMF functions, categories, and subcategories for a\
326
+ \ specific \nsetting, application, or technology – in this case, Generative AI\
327
+ \ (GAI) – based on the requirements, risk \ntolerance, and resources of the Framework\
328
+ \ user. AI RMF profiles assist organizations in deciding how to \nbest manage AI\
329
+ \ risks in a manner that is well-aligned with their goals, considers legal/regulatory\
330
+ \ \nrequirements and best practices, and reflects risk management priorities. Consistent\
331
+ \ with other AI RMF \nprofiles, this profile offers insights into how risk can be\
332
+ \ managed across various stages of the AI lifecycle \nand for GAI as a technology.\
333
+ \ \nAs GAI covers risks of models or applications that can be used across use\
334
+ \ cases or sectors, this document \nis an AI RMF cross-sectoral profile. Cross-sectoral\
335
+ \ profiles can be used to govern, map, measure, and \nmanage risks associated with\
336
+ \ activities or business processes common across sectors, such as the use of \n\
337
+ large language models (LLMs), cloud-based services, or acquisition. \nThis document\
338
+ \ defines risks that are novel to or exacerbated by the use of GAI. After introducing\
339
+ \ and \ndescribing these risks, the document provides a set of suggested actions\
340
+ \ to help organizations govern, \nmap, measure, and manage these risks. \n \n\
341
+ \ \n1 EO 14110 defines Generative AI as “the class of AI models that emulate the\
342
+ \ structure and characteristics of input \ndata in order to generate derived synthetic\
343
+ \ content. This can include images, videos, audio, text, and other digital \n\
344
+ content.” While not all GAI is derived from foundation models, for purposes of\
345
+ \ this document, GAI generally refers \nto generative foundation models. The foundation\
346
+ \ model subcategory of “dual-use foundation models” is defined by \nEO 14110 as\
347
+ \ “an AI model that is trained on broad data; generally uses self-supervision;\
348
+ \ contains at least tens of \nbillions of parameters; is applicable across a wide\
349
+ \ range of contexts.” \n2 This profile was developed per Section 4.1(a)(i)(A)\
350
+ \ of EO 14110, which directs the Secretary of Commerce, acting \nthrough the Director\
351
+ \ of the National Institute of Standards and Technology (NIST), to develop a companion\
352
+ \ \nresource to the AI RMF, NIST AI 100–1, for generative AI."
353
+ - source_sentence: What are the primary information security risks associated with
354
+ GAI-based systems in the context of cybersecurity?
355
+ sentences:
356
+ - "7 \nunethical behavior. Text-to-image models also make it easy to create images\
357
+ \ that could be used to \npromote dangerous or violent messages. Similar concerns\
358
+ \ are present for other GAI media, including \nvideo and audio. GAI may also produce\
359
+ \ content that recommends self-harm or criminal/illegal activities. \nMany current\
360
+ \ systems restrict model outputs to limit certain content or in response to certain\
361
+ \ prompts, \nbut this approach may still produce harmful recommendations in response\
362
+ \ to other less-explicit, novel \nprompts (also relevant to CBRN Information or\
363
+ \ Capabilities, Data Privacy, Information Security, and \nObscene, Degrading and/or\
364
+ \ Abusive Content). Crafting such prompts deliberately is known as \n“jailbreaking,”\
365
+ \ or, manipulating prompts to circumvent output controls. Limitations of GAI systems\
366
+ \ can be \nharmful or dangerous in certain contexts. Studies have observed that\
367
+ \ users may disclose mental health \nissues in conversations with chatbots – and\
368
+ \ that users exhibit negative reactions to unhelpful responses \nfrom these chatbots\
369
+ \ during situations of distress. \nThis risk encompasses difficulty controlling\
370
+ \ creation of and public exposure to offensive or hateful \nlanguage, and denigrating\
371
+ \ or stereotypical content generated by AI. This kind of speech may contribute\
372
+ \ \nto downstream harm such as fueling dangerous or violent behaviors. The spread\
373
+ \ of denigrating or \nstereotypical content can also further exacerbate representational\
374
+ \ harms (see Harmful Bias and \nHomogenization below). \nTrustworthy AI Characteristics:\
375
+ \ Safe, Secure and Resilient \n2.4. Data Privacy \nGAI systems raise several risks\
376
+ \ to privacy. GAI system training requires large volumes of data, which in \n\
377
+ some cases may include personal data. The use of personal data for GAI training\
378
+ \ raises risks to widely \naccepted privacy principles, including to transparency,\
379
+ \ individual participation (including consent), and \npurpose specification. For\
380
+ \ example, most model developers do not disclose specific data sources on \nwhich\
381
+ \ models were trained, limiting user awareness of whether personally identifiably\
382
+ \ information (PII) \nwas trained on and, if so, how it was collected. \nModels\
383
+ \ may leak, generate, or correctly infer sensitive information about individuals.\
384
+ \ For example, \nduring adversarial attacks, LLMs have revealed sensitive information\
385
+ \ (from the public domain) that was \nincluded in their training data. This problem\
386
+ \ has been referred to as data memorization, and may pose \nexacerbated privacy\
387
+ \ risks even for data present only in a small number of training samples. \n\
388
+ In addition to revealing sensitive information in GAI training data, GAI models\
389
+ \ may be able to correctly \ninfer PII or sensitive data that was not in their\
390
+ \ training data nor disclosed by the user by stitching \ntogether information\
391
+ \ from disparate sources. These inferences can have negative impact on an individual\
392
+ \ \neven if the inferences are not accurate (e.g., confabulations), and especially\
393
+ \ if they reveal information \nthat the individual considers sensitive or that\
394
+ \ is used to disadvantage or harm them. \nBeyond harms from information exposure\
395
+ \ (such as extortion or dignitary harm), wrong or inappropriate \ninferences of\
396
+ \ PII can contribute to downstream or secondary harmful impacts. For example,\
397
+ \ predictive \ninferences made by GAI models based on PII or protected attributes\
398
+ \ can contribute to adverse decisions, \nleading to representational or allocative\
399
+ \ harms to individuals or groups (see Harmful Bias and \nHomogenization below)."
400
+ - "10 \nGAI systems can ease the unintentional production or dissemination of false,\
401
+ \ inaccurate, or misleading \ncontent (misinformation) at scale, particularly\
402
+ \ if the content stems from confabulations. \nGAI systems can also ease the deliberate\
403
+ \ production or dissemination of false or misleading information \n(disinformation)\
404
+ \ at scale, where an actor has the explicit intent to deceive or cause harm to\
405
+ \ others. Even \nvery subtle changes to text or images can manipulate human and\
406
+ \ machine perception. \nSimilarly, GAI systems could enable a higher degree of\
407
+ \ sophistication for malicious actors to produce \ndisinformation that is targeted\
408
+ \ towards specific demographics. Current and emerging multimodal models \nmake\
409
+ \ it possible to generate both text-based disinformation and highly realistic\
410
+ \ “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12\
411
+ \ Additional disinformation threats could be \nenabled by future GAI models trained\
412
+ \ on new data modalities. \nDisinformation and misinformation – both of which\
413
+ \ may be facilitated by GAI – may erode public trust in \ntrue or valid evidence\
414
+ \ and information, with downstream effects. For example, a synthetic image of a\
415
+ \ \nPentagon blast went viral and briefly caused a drop in the stock market. Generative\
416
+ \ AI models can also \nassist malicious actors in creating compelling imagery\
417
+ \ and propaganda to support disinformation \ncampaigns, which may not be photorealistic,\
418
+ \ but could enable these campaigns to gain more reach and \nengagement on social\
419
+ \ media platforms. Additionally, generative AI models can assist malicious actors\
420
+ \ in \ncreating fraudulent content intended to impersonate others. \nTrustworthy\
421
+ \ AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable\
422
+ \ and \nExplainable \n2.9. Information Security \nInformation security for computer\
423
+ \ systems and data is a mature field with widely accepted and \nstandardized practices\
424
+ \ for offensive and defensive cyber capabilities. GAI-based systems present two\
425
+ \ \nprimary information security risks: GAI could potentially discover or enable\
426
+ \ new cybersecurity risks by \nlowering the barriers for or easing automated exercise\
427
+ \ of offensive capabilities; simultaneously, it \nexpands the available attack\
428
+ \ surface, as GAI itself is vulnerable to attacks like prompt injection or data\
429
+ \ \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment\
430
+ \ cybersecurity attacks such as \nhacking, malware, and phishing. Reports have\
431
+ \ indicated that LLMs are already able to discover some \nvulnerabilities in systems\
432
+ \ (hardware, software, data) and write code to exploit them. Sophisticated threat\
433
+ \ \nactors might further these risks by developing GAI-powered security co-pilots\
434
+ \ for use in several parts of \nthe attack chain, including informing attackers\
435
+ \ on how to proactively evade threat detection and escalate \nprivileges after\
436
+ \ gaining system access. \nInformation security for GAI models and systems also\
437
+ \ includes maintaining availability of the GAI system \nand the integrity and\
438
+ \ (when applicable) the confidentiality of the GAI code, training data, and model\
439
+ \ \nweights. To identify and secure potential attack points in AI systems or specific\
440
+ \ components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4,\
441
+ \ to be published."
442
+ - "16 \nGOVERN 1.5: Ongoing monitoring and periodic review of the risk management\
443
+ \ process and its outcomes are planned, and \norganizational roles and responsibilities\
444
+ \ are clearly defined, including determining the frequency of periodic review.\
445
+ \ \nAction ID \nSuggested Action \nGAI Risks \nGV-1.5-001 Define organizational\
446
+ \ responsibilities for periodic review of content provenance \nand incident monitoring\
447
+ \ for GAI systems. \nInformation Integrity \nGV-1.5-002 \nEstablish organizational\
448
+ \ policies and procedures for after action reviews of GAI \nsystem incident response\
449
+ \ and incident disclosures, to identify gaps; Update \nincident response and incident\
450
+ \ disclosure processes as required. \nHuman-AI Configuration; \nInformation Security\
451
+ \ \nGV-1.5-003 \nMaintain a document retention policy to keep history for test,\
452
+ \ evaluation, \nvalidation, and verification (TEVV), and digital content transparency\
453
+ \ methods for \nGAI. \nInformation Integrity; Intellectual \nProperty \nAI Actor\
454
+ \ Tasks: Governance and Oversight, Operation and Monitoring \n \nGOVERN 1.6: Mechanisms\
455
+ \ are in place to inventory AI systems and are resourced according to organizational\
456
+ \ risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.6-001 Enumerate\
457
+ \ organizational GAI systems for incorporation into AI system inventory \nand\
458
+ \ adjust AI system inventory requirements to account for GAI risks. \nInformation\
459
+ \ Security \nGV-1.6-002 Define any inventory exemptions in organizational policies\
460
+ \ for GAI systems \nembedded into application software. \nValue Chain and Component\
461
+ \ \nIntegration \nGV-1.6-003 \nIn addition to general model, governance, and risk\
462
+ \ information, consider the \nfollowing items in GAI system inventory entries:\
463
+ \ Data provenance information \n(e.g., source, signatures, versioning, watermarks);\
464
+ \ Known issues reported from \ninternal bug tracking or external information sharing\
465
+ \ resources (e.g., AI incident \ndatabase, AVID, CVE, NVD, or OECD AI incident\
466
+ \ monitor); Human oversight roles \nand responsibilities; Special rights and considerations\
467
+ \ for intellectual property, \nlicensed works, or personal, privileged, proprietary\
468
+ \ or sensitive data; Underlying \nfoundation models, versions of underlying models,\
469
+ \ and access modes. \nData Privacy; Human-AI \nConfiguration; Information \nIntegrity;\
470
+ \ Intellectual Property; \nValue Chain and Component \nIntegration \nAI Actor\
471
+ \ Tasks: Governance and Oversight"
+ ---
+
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
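As a quick illustration of the clustering use case listed above, the sketch below embeds a few sentences and groups them with k-means. The sentences, the scikit-learn dependency, and the choice of two clusters are illustrative assumptions, not part of this repository.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("danicafisher/dfisher-fine-tuned-sentence-transformer")

# Example sentences (illustrative only).
sentences = [
    "Organizations should document and report GAI incidents to relevant AI Actors.",
    "Incident reporting guidelines help operators trace AI harms to their source.",
    "Mean pooling averages token embeddings into a single sentence vector.",
    "The pooling layer reduces a sequence of token vectors to one fixed-size embedding.",
]

# Encode to 384-dimensional vectors, then cluster the embeddings.
embeddings = model.encode(sentences)
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(embeddings)
for label, sentence in zip(labels, sentences):
    print(label, sentence)
```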
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
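Read as a pipeline, these modules mean: the BERT encoder produces per-token vectors, mean pooling collapses them into a single 384-dimensional sentence vector, and `Normalize()` makes that vector unit length, so cosine similarity reduces to a dot product. The sketch below approximates this pipeline directly with `transformers`, assuming the repository exposes the underlying BERT weights in the standard layout; it is an illustration, not the library's exact code path.

```python
# Illustrative re-implementation of what encode() does for this model; not the library's code.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "danicafisher/dfisher-fine-tuned-sentence-transformer"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

texts = ["What are the primary information security risks of GAI-based systems?"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state   # (batch, seq_len, 384)

# (1) Pooling: attention-mask-weighted mean over tokens
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

# (2) Normalize: unit-length vectors, so dot products are cosine similarities
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 384])
```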
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("danicafisher/dfisher-fine-tuned-sentence-transformer")
+ # Run inference
+ sentences = [
+ 'What are the primary information security risks associated with GAI-based systems in the context of cybersecurity?',
+ '10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations. \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. \nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.',
+ '7 \nunethical behavior. Text-to-image models also make it easy to create images that could be used to \npromote dangerous or violent messages. Similar concerns are present for other GAI media, including \nvideo and audio. GAI may also produce content that recommends self-harm or criminal/illegal activities. \nMany current systems restrict model outputs to limit certain content or in response to certain prompts, \nbut this approach may still produce harmful recommendations in response to other less-explicit, novel \nprompts (also relevant to CBRN Information or Capabilities, Data Privacy, Information Security, and \nObscene, Degrading and/or Abusive Content). Crafting such prompts deliberately is known as \n“jailbreaking,” or, manipulating prompts to circumvent output controls. Limitations of GAI systems can be \nharmful or dangerous in certain contexts. Studies have observed that users may disclose mental health \nissues in conversations with chatbots – and that users exhibit negative reactions to unhelpful responses \nfrom these chatbots during situations of distress. \nThis risk encompasses difficulty controlling creation of and public exposure to offensive or hateful \nlanguage, and denigrating or stereotypical content generated by AI. This kind of speech may contribute \nto downstream harm such as fueling dangerous or violent behaviors. The spread of denigrating or \nstereotypical content can also further exacerbate representational harms (see Harmful Bias and \nHomogenization below). \nTrustworthy AI Characteristics: Safe, Secure and Resilient \n2.4. Data Privacy \nGAI systems raise several risks to privacy. GAI system training requires large volumes of data, which in \nsome cases may include personal data. The use of personal data for GAI training raises risks to widely \naccepted privacy principles, including to transparency, individual participation (including consent), and \npurpose specification. For example, most model developers do not disclose specific data sources on \nwhich models were trained, limiting user awareness of whether personally identifiably information (PII) \nwas trained on and, if so, how it was collected. \nModels may leak, generate, or correctly infer sensitive information about individuals. For example, \nduring adversarial attacks, LLMs have revealed sensitive information (from the public domain) that was \nincluded in their training data. This problem has been referred to as data memorization, and may pose \nexacerbated privacy risks even for data present only in a small number of training samples. \nIn addition to revealing sensitive information in GAI training data, GAI models may be able to correctly \ninfer PII or sensitive data that was not in their training data nor disclosed by the user by stitching \ntogether information from disparate sources. These inferences can have negative impact on an individual \neven if the inferences are not accurate (e.g., confabulations), and especially if they reveal information \nthat the individual considers sensitive or that is used to disadvantage or harm them. \nBeyond harms from information exposure (such as extortion or dignitary harm), wrong or inappropriate \ninferences of PII can contribute to downstream or secondary harmful impacts. For example, predictive \ninferences made by GAI models based on PII or protected attributes can contribute to adverse decisions, \nleading to representational or allocative harms to individuals or groups (see Harmful Bias and \nHomogenization below).',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
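Beyond pairwise similarity, the same embeddings support semantic search over a small corpus. The sketch below ranks a few candidate passages against a query with `model.similarity`; the query and passages are illustrative placeholders rather than data shipped with this model.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("danicafisher/dfisher-fine-tuned-sentence-transformer")

query = "How can organizations manage third-party GAI risks?"
# Candidate passages (illustrative only).
passages = [
    "Organizations can apply acquisition and procurement due diligence to third-party GAI tools.",
    "Mean pooling averages token embeddings into a single sentence vector.",
    "Contract clauses can allow an organization to evaluate third-party GAI processes and standards.",
]

# Encode the query and candidates, then rank candidates by cosine similarity.
query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)
scores = model.similarity(query_embedding, passage_embeddings)[0]

for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```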
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 128 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 128 samples:
+ |         | sentence_0 | sentence_1 |
+ |:--------|:-----------|:-----------|
+ | type    | string     | string     |
+ | details | <ul><li>min: 17 tokens</li><li>mean: 23.14 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 56 tokens</li><li>mean: 247.42 tokens</li><li>max: 256 tokens</li></ul> |
+ * Samples:
+ | sentence_0 | sentence_1 |
+ |:-----------|:-----------|
591
+ | <code>How should fairness assessments be conducted to measure systemic bias across demographic groups in GAI systems?</code> | <code>36 <br>MEASURE 2.11: Fairness and bias – as identified in the MAP function – are evaluated and results are documented. <br>Action ID <br>Suggested Action <br>GAI Risks <br>MS-2.11-001 <br>Apply use-case appropriate benchmarks (e.g., Bias Benchmark Questions, Real <br>Hateful or Harmful Prompts, Winogender Schemas15) to quantify systemic bias, <br>stereotyping, denigration, and hateful content in GAI system outputs; <br>Document assumptions and limitations of benchmarks, including any actual or <br>possible training/test data cross contamination, relative to in-context <br>deployment environment. <br>Harmful Bias and Homogenization <br>MS-2.11-002 <br>Conduct fairness assessments to measure systemic bias. Measure GAI system <br>performance across demographic groups and subgroups, addressing both <br>quality of service and any allocation of services and resources. Quantify harms <br>using: field testing with sub-group populations to determine likelihood of <br>exposure to generated content exhibiting harmful bias, AI red-teaming with <br>counterfactual and low-context (e.g., “leader,” “bad guys”) prompts. For ML <br>pipelines or business processes with categorical or numeric outcomes that rely <br>on GAI, apply general fairness metrics (e.g., demographic parity, equalized odds, <br>equal opportunity, statistical hypothesis tests), to the pipeline or business <br>outcome where appropriate; Custom, context-specific metrics developed in <br>collaboration with domain experts and affected communities; Measurements of <br>the prevalence of denigration in generated content in deployment (e.g., sub-<br>sampling a fraction of traffic and manually annotating denigrating content). <br>Harmful Bias and Homogenization; <br>Dangerous, Violent, or Hateful <br>Content <br>MS-2.11-003 <br>Identify the classes of individuals, groups, or environmental ecosystems which <br>might be impacted by GAI systems through direct engagement with potentially <br>impacted communities. <br>Environmental; Harmful Bias and <br>Homogenization <br>MS-2.11-004 <br>Review, document, and measure sources of bias in GAI training and TEVV data: <br>Differences in distributions of outcomes across and within groups, including <br>intersecting groups; Completeness, representativeness, and balance of data <br>sources; demographic group and subgroup coverage in GAI system training <br>data; Forms of latent systemic bias in images, text, audio, embeddings, or other <br>complex or unstructured data; Input data features that may serve as proxies for <br>demographic group membership (i.e., image metadata, language dialect) or <br>otherwise give rise to emergent bias within GAI systems; The extent to which <br>the digital divide may negatively impact representativeness in GAI system <br>training and TEVV data; Filtering of hate speech or content in GAI system <br>training data; Prevalence of GAI-generated data in GAI system training data. <br>Harmful Bias and Homogenization <br> <br> <br>15 Winogender Schemas is a sample set of paired sentences which differ only by gender of the pronouns used, <br>which can be used to evaluate gender bias in natural language processing coreference resolution systems.</code> |
592
+ | <code>How should organizations adjust their AI system inventory requirements to account for GAI risks?</code> | <code>16 <br>GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and <br>organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review. <br>Action ID <br>Suggested Action <br>GAI Risks <br>GV-1.5-001 Define organizational responsibilities for periodic review of content provenance <br>and incident monitoring for GAI systems. <br>Information Integrity <br>GV-1.5-002 <br>Establish organizational policies and procedures for after action reviews of GAI <br>system incident response and incident disclosures, to identify gaps; Update <br>incident response and incident disclosure processes as required. <br>Human-AI Configuration; <br>Information Security <br>GV-1.5-003 <br>Maintain a document retention policy to keep history for test, evaluation, <br>validation, and verification (TEVV), and digital content transparency methods for <br>GAI. <br>Information Integrity; Intellectual <br>Property <br>AI Actor Tasks: Governance and Oversight, Operation and Monitoring <br> <br>GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities. <br>Action ID <br>Suggested Action <br>GAI Risks <br>GV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory <br>and adjust AI system inventory requirements to account for GAI risks. <br>Information Security <br>GV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems <br>embedded into application software. <br>Value Chain and Component <br>Integration <br>GV-1.6-003 <br>In addition to general model, governance, and risk information, consider the <br>following items in GAI system inventory entries: Data provenance information <br>(e.g., source, signatures, versioning, watermarks); Known issues reported from <br>internal bug tracking or external information sharing resources (e.g., AI incident <br>database, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles <br>and responsibilities; Special rights and considerations for intellectual property, <br>licensed works, or personal, privileged, proprietary or sensitive data; Underlying <br>foundation models, versions of underlying models, and access modes. <br>Data Privacy; Human-AI <br>Configuration; Information <br>Integrity; Intellectual Property; <br>Value Chain and Component <br>Integration <br>AI Actor Tasks: Governance and Oversight</code> |
593
+ | <code>What framework is suggested for evaluating and monitoring third-party entities' performance and adherence to content provenance standards?</code> | <code>21 <br>GV-6.1-005 <br>Implement a use-cased based supplier risk assessment framework to evaluate and <br>monitor third-party entities’ performance and adherence to content provenance <br>standards and technologies to detect anomalies and unauthorized changes; <br>services acquisition and value chain risk management; and legal compliance. <br>Data Privacy; Information <br>Integrity; Information Security; <br>Intellectual Property; Value Chain <br>and Component Integration <br>GV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party <br>GAI processes and standards. <br>Information Integrity <br>GV-6.1-007 Inventory all third-party entities with access to organizational content and <br>establish approved GAI technology and service provider lists. <br>Value Chain and Component <br>Integration <br>GV-6.1-008 Maintain records of changes to content made by third parties to promote content <br>provenance, including sources, timestamps, metadata. <br>Information Integrity; Value Chain <br>and Component Integration; <br>Intellectual Property <br>GV-6.1-009 <br>Update and integrate due diligence processes for GAI acquisition and <br>procurement vendor assessments to include intellectual property, data privacy, <br>security, and other risks. For example, update processes to: Address solutions that <br>may rely on embedded GAI technologies; Address ongoing monitoring, <br>assessments, and alerting, dynamic risk assessments, and real-time reporting <br>tools for monitoring third-party GAI risks; Consider policy adjustments across GAI <br>modeling libraries, tools and APIs, fine-tuned models, and embedded tools; <br>Assess GAI vendors, open-source or proprietary GAI tools, or GAI service <br>providers against incident or vulnerability databases. <br>Data Privacy; Human-AI <br>Configuration; Information <br>Security; Intellectual Property; <br>Value Chain and Component <br>Integration; Harmful Bias and <br>Homogenization <br>GV-6.1-010 <br>Update GAI acceptable use policies to address proprietary and open-source GAI <br>technologies and data, and contractors, consultants, and other third-party <br>personnel. <br>Intellectual Property; Value Chain <br>and Component Integration <br>AI Actor Tasks: Operation and Monitoring, Procurement, Third-party entities <br> <br>GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be <br>high-risk. <br>Action ID <br>Suggested Action <br>GAI Risks <br>GV-6.2-001 <br>Document GAI risks associated with system value chain to identify over-reliance <br>on third-party data and to identify fallbacks. <br>Value Chain and Component <br>Integration <br>GV-6.2-002 <br>Document incidents involving third-party GAI data and systems, including open-<br>data and open-source software. <br>Intellectual Property; Value Chain <br>and Component Integration</code> |
594
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
595
+ ```json
596
+ {
597
+ "scale": 20.0,
598
+ "similarity_fct": "cos_sim"
599
+ }
600
+ ```
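For context, the snippet below is a minimal sketch of how a loss with these parameters can be constructed with the `sentence_transformers.losses` API; it is illustrative, not the original training script.

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Load the base checkpoint that this model was fine-tuned from.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# scale=20.0 and cosine similarity mirror the parameters documented above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```

With this loss, each `sentence_0` is trained to rank its paired `sentence_1` above the other `sentence_1` entries in the same batch (in-batch negatives), which is why only positive pairs are needed in the training data.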
601
+
602
+ ### Training Hyperparameters
603
+ #### Non-Default Hyperparameters
604
+
605
+ - `per_device_train_batch_size`: 16
606
+ - `per_device_eval_batch_size`: 16
607
+ - `multi_dataset_batch_sampler`: round_robin
608
+
609
+ #### All Hyperparameters
610
+ <details><summary>Click to expand</summary>
611
+
612
+ - `overwrite_output_dir`: False
613
+ - `do_predict`: False
614
+ - `eval_strategy`: no
615
+ - `prediction_loss_only`: True
616
+ - `per_device_train_batch_size`: 16
617
+ - `per_device_eval_batch_size`: 16
618
+ - `per_gpu_train_batch_size`: None
619
+ - `per_gpu_eval_batch_size`: None
620
+ - `gradient_accumulation_steps`: 1
621
+ - `eval_accumulation_steps`: None
622
+ - `torch_empty_cache_steps`: None
623
+ - `learning_rate`: 5e-05
624
+ - `weight_decay`: 0.0
625
+ - `adam_beta1`: 0.9
626
+ - `adam_beta2`: 0.999
627
+ - `adam_epsilon`: 1e-08
628
+ - `max_grad_norm`: 1
629
+ - `num_train_epochs`: 3
630
+ - `max_steps`: -1
631
+ - `lr_scheduler_type`: linear
632
+ - `lr_scheduler_kwargs`: {}
633
+ - `warmup_ratio`: 0.0
634
+ - `warmup_steps`: 0
635
+ - `log_level`: passive
636
+ - `log_level_replica`: warning
637
+ - `log_on_each_node`: True
638
+ - `logging_nan_inf_filter`: True
639
+ - `save_safetensors`: True
640
+ - `save_on_each_node`: False
641
+ - `save_only_model`: False
642
+ - `restore_callback_states_from_checkpoint`: False
643
+ - `no_cuda`: False
644
+ - `use_cpu`: False
645
+ - `use_mps_device`: False
646
+ - `seed`: 42
647
+ - `data_seed`: None
648
+ - `jit_mode_eval`: False
649
+ - `use_ipex`: False
650
+ - `bf16`: False
651
+ - `fp16`: False
652
+ - `fp16_opt_level`: O1
653
+ - `half_precision_backend`: auto
654
+ - `bf16_full_eval`: False
655
+ - `fp16_full_eval`: False
656
+ - `tf32`: None
657
+ - `local_rank`: 0
658
+ - `ddp_backend`: None
659
+ - `tpu_num_cores`: None
660
+ - `tpu_metrics_debug`: False
661
+ - `debug`: []
662
+ - `dataloader_drop_last`: False
663
+ - `dataloader_num_workers`: 0
664
+ - `dataloader_prefetch_factor`: None
665
+ - `past_index`: -1
666
+ - `disable_tqdm`: False
667
+ - `remove_unused_columns`: True
668
+ - `label_names`: None
669
+ - `load_best_model_at_end`: False
670
+ - `ignore_data_skip`: False
671
+ - `fsdp`: []
672
+ - `fsdp_min_num_params`: 0
673
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
674
+ - `fsdp_transformer_layer_cls_to_wrap`: None
675
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
676
+ - `deepspeed`: None
677
+ - `label_smoothing_factor`: 0.0
678
+ - `optim`: adamw_torch
679
+ - `optim_args`: None
680
+ - `adafactor`: False
681
+ - `group_by_length`: False
682
+ - `length_column_name`: length
683
+ - `ddp_find_unused_parameters`: None
684
+ - `ddp_bucket_cap_mb`: None
685
+ - `ddp_broadcast_buffers`: False
686
+ - `dataloader_pin_memory`: True
687
+ - `dataloader_persistent_workers`: False
688
+ - `skip_memory_metrics`: True
689
+ - `use_legacy_prediction_loop`: False
690
+ - `push_to_hub`: False
691
+ - `resume_from_checkpoint`: None
692
+ - `hub_model_id`: None
693
+ - `hub_strategy`: every_save
694
+ - `hub_private_repo`: False
695
+ - `hub_always_push`: False
696
+ - `gradient_checkpointing`: False
697
+ - `gradient_checkpointing_kwargs`: None
698
+ - `include_inputs_for_metrics`: False
699
+ - `eval_do_concat_batches`: True
700
+ - `fp16_backend`: auto
701
+ - `push_to_hub_model_id`: None
702
+ - `push_to_hub_organization`: None
703
+ - `mp_parameters`:
704
+ - `auto_find_batch_size`: False
705
+ - `full_determinism`: False
706
+ - `torchdynamo`: None
707
+ - `ray_scope`: last
708
+ - `ddp_timeout`: 1800
709
+ - `torch_compile`: False
710
+ - `torch_compile_backend`: None
711
+ - `torch_compile_mode`: None
712
+ - `dispatch_batches`: None
713
+ - `split_batches`: None
714
+ - `include_tokens_per_second`: False
715
+ - `include_num_input_tokens_seen`: False
716
+ - `neftune_noise_alpha`: None
717
+ - `optim_target_modules`: None
718
+ - `batch_eval_metrics`: False
719
+ - `eval_on_start`: False
720
+ - `eval_use_gather_object`: False
721
+ - `batch_sampler`: batch_sampler
722
+ - `multi_dataset_batch_sampler`: round_robin
723
+
724
+ </details>
725
+
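To approximate this run, the sketch below wires the non-default hyperparameters into the `SentenceTransformerTrainer` API from Sentence Transformers 3.x. The dataset rows and `output_dir` are placeholders, not values recovered from the original run; the `multi_dataset_batch_sampler` setting only matters when several training datasets are combined, so it is shown only as a comment.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder pairs with the same two-column layout as the samples above;
# the real training data is the 128-pair question/context dataset.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How should fairness assessments be conducted for GAI systems?"],
    "sentence_1": ["Conduct fairness assessments to measure systemic bias ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-all-MiniLM-L6-v2",  # placeholder, not from the original run
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    # multi_dataset_batch_sampler="round_robin" is only relevant with multiple datasets.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```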
726
+ ### Framework Versions
727
+ - Python: 3.10.12
728
+ - Sentence Transformers: 3.1.1
729
+ - Transformers: 4.44.2
730
+ - PyTorch: 2.4.1+cu121
731
+ - Accelerate: 0.34.2
732
+ - Datasets: 3.0.0
733
+ - Tokenizers: 0.19.1
734
+
735
+ ## Citation
736
+
737
+ ### BibTeX
738
+
739
+ #### Sentence Transformers
740
+ ```bibtex
741
+ @inproceedings{reimers-2019-sentence-bert,
742
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
743
+ author = "Reimers, Nils and Gurevych, Iryna",
744
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
745
+ month = "11",
746
+ year = "2019",
747
+ publisher = "Association for Computational Linguistics",
748
+ url = "https://arxiv.org/abs/1908.10084",
749
+ }
750
+ ```
751
+
752
+ #### MultipleNegativesRankingLoss
753
+ ```bibtex
754
+ @misc{henderson2017efficient,
755
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
756
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
757
+ year={2017},
758
+ eprint={1705.00652},
759
+ archivePrefix={arXiv},
760
+ primaryClass={cs.CL}
761
+ }
762
+ ```
763
+
764
+ <!--
765
+ ## Glossary
766
+
767
+ *Clearly define terms in order to be accessible across audiences.*
768
+ -->
769
+
770
+ <!--
771
+ ## Model Card Authors
772
+
773
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
774
+ -->
775
+
776
+ <!--
777
+ ## Model Card Contact
778
+
779
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
780
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
1
+ {
2
+ "_name_or_path": "sentence-transformers/all-MiniLM-L6-v2",
3
+ "architectures": [
4
+ "BertModel"
5
+ ],
6
+ "attention_probs_dropout_prob": 0.1,
7
+ "classifier_dropout": null,
8
+ "gradient_checkpointing": false,
9
+ "hidden_act": "gelu",
10
+ "hidden_dropout_prob": 0.1,
11
+ "hidden_size": 384,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 1536,
14
+ "layer_norm_eps": 1e-12,
15
+ "max_position_embeddings": 512,
16
+ "model_type": "bert",
17
+ "num_attention_heads": 12,
18
+ "num_hidden_layers": 6,
19
+ "pad_token_id": 0,
20
+ "position_embedding_type": "absolute",
21
+ "torch_dtype": "float32",
22
+ "transformers_version": "4.44.2",
23
+ "type_vocab_size": 2,
24
+ "use_cache": true,
25
+ "vocab_size": 30522
26
+ }
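This configuration is identical to the published `all-MiniLM-L6-v2` backbone (a 6-layer, 384-dimensional `BertModel`). As a hedged sanity check, the key dimensions can be verified with `transformers`; the base checkpoint is used here because this repository's Hub id is not spelled out in the diff.

```python
from transformers import AutoConfig

# The base checkpoint shares the exact configuration committed above.
config = AutoConfig.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
assert config.num_hidden_layers == 6 and config.hidden_size == 384
assert config.max_position_embeddings == 512 and config.vocab_size == 30522
```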
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "__version__": {
3
+ "sentence_transformers": "3.1.1",
4
+ "transformers": "4.44.2",
5
+ "pytorch": "2.4.1+cu121"
6
+ },
7
+ "prompts": {},
8
+ "default_prompt_name": null,
9
+ "similarity_fn_name": null
10
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b90993e66a57ee51c90e3a3c9307dfaec64b70932da616d38b43da7ce9153e3
3
+ size 90864192
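A rough back-of-the-envelope check on the weight file, assuming float32 storage as declared in `config.json`:

```python
# The LFS pointer above records a 90,864,192-byte safetensors payload.
# At 4 bytes per float32 value that is roughly 22.7 million parameters
# (minus a small safetensors header), which matches the MiniLM-L6 backbone.
print(90_864_192 / 4)  # ~22.7e6
```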
modules.json ADDED
@@ -0,0 +1,20 @@
1
+ [
2
+ {
3
+ "idx": 0,
4
+ "name": "0",
5
+ "path": "",
6
+ "type": "sentence_transformers.models.Transformer"
7
+ },
8
+ {
9
+ "idx": 1,
10
+ "name": "1",
11
+ "path": "1_Pooling",
12
+ "type": "sentence_transformers.models.Pooling"
13
+ },
14
+ {
15
+ "idx": 2,
16
+ "name": "2",
17
+ "path": "2_Normalize",
18
+ "type": "sentence_transformers.models.Normalize"
19
+ }
20
+ ]
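`modules.json` declares the encoding pipeline: a Transformer module, mean pooling, then L2 normalization. The sketch below rebuilds an equivalent pipeline by hand with `sentence_transformers.models`; the checkpoint name is the base model and `max_seq_length` comes from `sentence_bert_config.json`. This is illustrative only, since loading the repository id directly with `SentenceTransformer(...)` reads these module files automatically.

```python
from sentence_transformers import SentenceTransformer, models

# Transformer -> mean Pooling -> Normalize, mirroring modules.json.
transformer = models.Transformer("sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256)
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
embedding = model.encode("GAI risk management")
print(embedding.shape)  # (384,), already unit-length thanks to the Normalize module
```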
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "max_seq_length": 256,
3
+ "do_lower_case": false
4
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "cls_token": {
3
+ "content": "[CLS]",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "mask_token": {
10
+ "content": "[MASK]",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "[PAD]",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "sep_token": {
24
+ "content": "[SEP]",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ },
30
+ "unk_token": {
31
+ "content": "[UNK]",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false
36
+ }
37
+ }
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "[PAD]",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "100": {
12
+ "content": "[UNK]",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "101": {
20
+ "content": "[CLS]",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "102": {
28
+ "content": "[SEP]",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "103": {
36
+ "content": "[MASK]",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ }
43
+ },
44
+ "clean_up_tokenization_spaces": true,
45
+ "cls_token": "[CLS]",
46
+ "do_basic_tokenize": true,
47
+ "do_lower_case": true,
48
+ "mask_token": "[MASK]",
49
+ "max_length": 128,
50
+ "model_max_length": 256,
51
+ "never_split": null,
52
+ "pad_to_multiple_of": null,
53
+ "pad_token": "[PAD]",
54
+ "pad_token_type_id": 0,
55
+ "padding_side": "right",
56
+ "sep_token": "[SEP]",
57
+ "stride": 0,
58
+ "strip_accents": null,
59
+ "tokenize_chinese_chars": true,
60
+ "tokenizer_class": "BertTokenizer",
61
+ "truncation_side": "right",
62
+ "truncation_strategy": "longest_first",
63
+ "unk_token": "[UNK]"
64
+ }
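The tokenizer is the uncased WordPiece `BertTokenizer` inherited from the base model. A small illustration of its behaviour, using the base checkpoint since it shares the same vocabulary and special tokens:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

encoded = tokenizer("Hello World")
print(encoded["input_ids"])                    # [101, 7592, 2088, 102]: [CLS] ... [SEP]
print(tokenizer.decode(encoded["input_ids"]))  # "[CLS] hello world [SEP]", lower-cased
```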
vocab.txt ADDED
The diff for this file is too large to render.