* **Potential Biases:** Like all language models, Everyday-Language-3B may inherit biases from its pre-training data and the fine-tuning dataset. These biases can manifest in the generated text, potentially leading to outputs that reflect societal stereotypes or unfair assumptions.
* **Factuality:** The model may generate text that is not factually accurate, especially when dealing with complex or nuanced topics. It's crucial to verify information generated by the model before relying on it.
* **Repetition:** Although significantly reduced by fine-tuning, the model may still exhibit some repetition in longer generated text.
* **Creativity:** The model demonstrates limited creativity. While it can produce coherent, contextually appropriate responses in factual or informational domains, it struggles with tasks that require imagination, originality, and nuanced storytelling. It tends to produce predictable outputs and has difficulty deviating from patterns present in its training data, which makes it less suitable for creative writing, poetry generation, or other tasks that demand a high degree of imaginative output.
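For the repetition limitation above, one lightweight mitigation is to screen longer generations for repeated n-grams before surfacing them to users. The sketch below is illustrative only (the function name and thresholds are our own, not part of the model's API), using just the Python standard library:

```python
from collections import Counter


def repeated_ngrams(text: str, n: int = 3, threshold: int = 2) -> list[tuple[str, int]]:
    """Return word n-grams that occur at least `threshold` times in `text`.

    A non-empty result on a long generation is a cheap signal that the
    output may be looping and should be regenerated or truncated.
    """
    words = text.split()
    counts = Counter(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return [(ngram, count) for ngram, count in counts.items() if count >= threshold]


sample = "the cat sat on the mat and the cat sat on the rug"
print(repeated_ngrams(sample))
# [('the cat sat', 2), ('cat sat on', 2), ('sat on the', 2)]
```

In practice one would tune `n` and `threshold` to the expected output length; decoding-time controls such as a repetition penalty or an n-gram ban in the generation library are complementary options.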

## Training Data

Everyday-Language-3B was fine-tuned on the **Everyday-Language-Corpus** dataset, which is publicly available on Hugging Face: