Update README.md
<!-- {{ bias_risks_limitations | default("[More Information Needed]", true)}} -->
Please note that this model is not specifically designed or evaluated for all downstream purposes.
The model is not intended to be deployed in production settings. It should not be used in high-risk scenarios, such as military and defense, financial services, and critical infrastructure systems.
Developers should consider common limitations of multimodal models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case.
Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Like other multimodal models, Magma can potentially behave in ways that are unfair, unreliable, or offensive.
The model's outputs do not reflect the opinions of Microsoft.
Some of the limiting behaviors to be aware of include:
* **Quality of Service:** The model is trained primarily on English text, and it will perform worse in languages other than English. English language varieties with less representation in the training data may also perform worse than standard American English. Magma is not intended to support multilingual use.
* **Representation of Harms & Perpetuation of Stereotypes:** These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* **Inappropriate or Offensive Content:** These models may produce other types of inappropriate or offensive content, making them unsuitable for deployment in sensitive contexts without additional mitigations specific to the use case.
* **Information Reliability:** Multimodal models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g., privacy and trade compliance). Using safety services such as [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety), which provide advanced guardrails, is highly recommended.
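As one hedged illustration of such a guardrail, the sketch below gates a model response on per-category severity scores of the kind a moderation service returns (the category names and 0–7 severity scale mirror Azure AI Content Safety's text analysis; the `should_block` helper and its threshold are hypothetical, not part of Magma or the Azure SDK):

```python
# Hypothetical output guard: block a model response when any harm
# category reported by a moderation service (e.g. Azure AI Content
# Safety, which scores Hate, SelfHarm, Sexual, and Violence on a
# 0-7 severity scale) reaches a chosen threshold.

def should_block(category_severities: dict[str, int], threshold: int = 2) -> bool:
    """Return True if any category's severity is at or above the threshold."""
    return any(severity >= threshold for severity in category_severities.values())

# Example: severities as they might come back from an analyze-text call.
print(should_block({"Hate": 4, "SelfHarm": 0, "Sexual": 0, "Violence": 1}))  # True
print(should_block({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 1}))  # False
```

The appropriate threshold depends on the downstream use case; stricter settings are advisable in the sensitive contexts described above.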
### Recommendations