Update README.md
README.md CHANGED
@@ -46,6 +46,31 @@ Use of this model to generate content for public consumption or in any applicati
 
 Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
 
+## Training Data
+The "Gemma-2b-it" model was fine-tuned on a dataset composed of uncensored and toxic content, sourced from various online forums and platforms known for less moderated interactions. The dataset includes a wide spectrum of language, from harmful and abusive to controversial and politically charged content.
+Furthermore, some of the content was generated by Version 1 of "Svenni551/gemma-2b-it-toxic-dpo-v0.2".
+
+## Evaluation
+[More Information Needed]
+
+## Ethical Considerations
+### Risks and Harms
+The model has the potential to generate text that is harmful, offensive, or illegal. Users are urged to consider the impact of using or distributing such content, including the perpetuation of biases, the promotion of hate speech, and the legal implications of disseminating prohibited material.
+
+### Mitigations
+Efforts have been made to mitigate potential harms, including:
+- Restricting access to the model to researchers and developers with a clear and ethical use case.
+- Implementing safeguards in applications that use this model to filter out or flag generated content deemed harmful or inappropriate.
+
+## Limitations
+The model's understanding and generation of content are inherently influenced by its training data. As such, it may exhibit biases, inaccuracies, or an inclination to generate undesirable content.
+
+## Recommendations
+Users of this model are advised to:
+- Clearly define the scope and ethical boundaries of their research or educational projects.
+- Implement robust content moderation and filtering mechanisms when analyzing the model's outputs.
+- Engage with ethical review boards or oversight committees when planning research involving this model.
+
 #### Running the model on a CPU
 
 
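The CPU snippet itself is elided in this diff; only its tail (`outputs = model.generate(**input_ids)` and the `print` line) appears as context in the next hunk. A minimal sketch of the quick-start pattern the card describes, using the model id named in the card; the prompt is illustrative:

```python
# Minimal CPU inference sketch (illustrative; the card's full snippet is elided in this diff).
# Assumes `pip install -U transformers` as the README instructs.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Svenni551/gemma-2b-it-toxic-dpo-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder prompt; the last two lines match the context lines visible in this diff.
input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```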
@@ -62,10 +87,8 @@ outputs = model.generate(**input_ids)
 print(tokenizer.decode(outputs[0]))
 ```
 
-
 #### Running the model on a single / multi GPU
 
-
 ```python
 # pip install accelerate
 from transformers import AutoTokenizer, AutoModelForCausalLM
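The GPU snippet is likewise cut off after its imports. A sketch of the usual `accelerate`-backed pattern that the `# pip install accelerate` comment points to, assuming the same model id; `device_map="auto"` and the dtype choice are the conventional knobs here, not confirmed from the card:

```python
# Sketch of single / multi GPU loading via accelerate's device_map (illustrative).
# Requires: pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Svenni551/gemma-2b-it-toxic-dpo-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # place layers on the available GPU(s)
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
)

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```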
@@ -153,7 +176,6 @@ outputs = model.generate(**input_ids)
 print(tokenizer.decode(outputs[0]))
 ```
 
-
 #### Other optimizations
 
 * _Flash Attention 2_
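"Other optimizations" lists _Flash Attention 2_; in recent `transformers` releases this is enabled through the `attn_implementation` argument. A sketch, assuming `flash-attn` is installed and a supported CUDA GPU is available:

```python
# Sketch: enabling Flash Attention 2 (requires `pip install flash-attn` and a
# supported CUDA GPU; Flash Attention only runs with fp16/bf16 weights).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Svenni551/gemma-2b-it-toxic-dpo-v0.2",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```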
@@ -224,31 +246,6 @@ outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
   summarized.
 * **Output:** Generated English-language text in response to the input, such
   as an answer to a question, or a summary of a document.
-
-## Training Data
-The "Gemma-2b-it" model was fine-tuned on a dataset composed of uncensored and toxic content, sourced from various online forums and platforms known for less moderated interactions. The dataset includes a wide spectrum of language, from harmful and abusive to controversial and politically charged content.
-Furthermore, some of the content was generated by Version 1 of "Svenni551/gemma-2b-it-toxic-dpo-v0.2".
-
-## Evaluation
-[More Information Needed]
-
-## Ethical Considerations
-### Risks and Harms
-The model has the potential to generate text that is harmful, offensive, or illegal. Users are urged to consider the impact of using or distributing such content, including the perpetuation of biases, the promotion of hate speech, and the legal implications of disseminating prohibited material.
-
-### Mitigations
-Efforts have been made to mitigate potential harms, including:
-- Restricting access to the model to researchers and developers with a clear and ethical use case.
-- Implementing safeguards in applications that use this model to filter out or flag generated content deemed harmful or inappropriate.
-
-## Limitations
-The model's understanding and generation of content are inherently influenced by its training data. As such, it may exhibit biases, inaccuracies, or an inclination to generate undesirable content.
-
-## Recommendations
-Users of this model are advised to:
-- Clearly define the scope and ethical boundaries of their research or educational projects.
-- Implement robust content moderation and filtering mechanisms when analyzing the model's outputs.
-- Engage with ethical review boards or oversight committees when planning research involving this model.
 
 ## Model Card Authors
 [More Information Needed]
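The Mitigations and Recommendations items this commit moves up both call for filtering or flagging generated content. A hypothetical sketch of such a safeguard; the term list and the check itself are placeholders for a real moderation model or API, not anything shipped with the card:

```python
# Hypothetical output safeguard of the kind the Mitigations section describes.
# A real deployment would use a trained moderation classifier, not a keyword list.
def flag_output(text: str, blocked_terms: set) -> bool:
    """Return True if the generated text should be withheld for human review."""
    lowered = text.lower()
    return any(term in lowered for term in blocked_terms)

# Placeholder blocklist, for illustration only.
blocked = {"placeholder-term-1", "placeholder-term-2"}
print(flag_output("A harmless sentence.", blocked))  # -> False
```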