---
license: unlicense
datasets:
- molbal/horror-novel-chunks
language:
- en
library_name: peft
pipeline_tag: text-generation
---

# molbal/horrorllama3-8b-v1.0 model card

This model is a fine-tuned variant of Llama 3 8B. It was trained specifically on a dataset of horror novels obtained from Project Gutenberg, a public-domain digital library. It is the result of following this guide: https://github.com/molbal/llm-text-completion-finetune

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/638a3f5a8d7cce863e70a91b/UHm-ZptYr9VpK52xu9MNX.jpeg)

## Model Details

### Training

The model was fine-tuned by following the pipeline guide on a dataset of horror novels assembled from public-domain books on Project Gutenberg tagged with the topic "horror". The training and dataset-acquisition scripts are available at https://github.com/molbal/llm-text-completion-finetune
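
For reference, the dataset listed in this card's metadata can be inspected with the 🤗 `datasets` library. The snippet below is a minimal sketch, assuming the dataset is publicly accessible and exposes a `train` split; the column layout is printed rather than assumed.

```python
# Minimal sketch: peek at the training data used for this fine-tune.
# Assumes molbal/horror-novel-chunks is publicly accessible and has a "train" split.
from datasets import load_dataset

dataset = load_dataset("molbal/horror-novel-chunks", split="train")

print(dataset)     # number of rows and column names
print(dataset[0])  # one text chunk taken from a public-domain horror novel
```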

## Intended Use

This model is an educational/practice fine-tune. Because the training data was not thoroughly cleaned, it is not recommended for production use.

## Limitations

This is a text-completion model: it generates text that organically continues the given prompt. It does not follow instructions or answer questions, since it is not an instruct/chat model like ChatGPT. Although the model is fine-tuned for horror-themed content, the relevance and quality of the output still depend heavily on the prompt. The model cannot verify facts or guarantee accurate information. Inference time and resource usage vary with the infrastructure on which the model is deployed.
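
Because this is a completion-style PEFT adapter rather than a chat model, prompts should be written as the opening of a passage to be continued. The snippet below is a minimal sketch, assuming the adapter loads with 🤗 PEFT on top of the `meta-llama/Meta-Llama-3-8B` base model; the exact base repository ID is an assumption, so adjust it to the base weights you have access to.

```python
# Minimal sketch of text completion with this adapter.
# Assumption: the adapter applies on top of meta-llama/Meta-Llama-3-8B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"     # assumed base model ID
adapter_id = "molbal/horrorllama3-8b-v1.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Text completion: the model continues the prompt rather than answering it.
prompt = "The lantern guttered as she descended the cellar stairs, and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=200, do_sample=True, temperature=0.8
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```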