---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: true
tags:
- pytorch
- mistral
- finetuned
---
This is a 4 bpw ExLlamaV2 quantization of [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co./KoboldAI/Mistral-7B-Holodeck-1), made with the default calibration dataset.
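As a minimal sketch of how these quantized weights might be loaded, the snippet below uses the exllamav2 Python API; the local directory name `./Mistral-7B-Holodeck-1-4bpw` and the sampling settings are assumptions, and exact class names can vary between exllamav2 releases.

```python
# Minimal sketch, assuming the quantized weights have been downloaded to
# ./Mistral-7B-Holodeck-1-4bpw (hypothetical path) and that a recent
# exllamav2 release is installed (pip install exllamav2).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Mistral-7B-Holodeck-1-4bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPU memory

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Genre-tagged prompt in the style described in the original model card
prompt = "[Genre: fantasy, adventure]\nThe old keep had stood empty for a hundred years,"
print(generator.generate_simple(prompt, settings, num_tokens=200))
```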
# Original Model card: | |
# Mistral 7B - Holodeck | |
## Model Description | |
Mistral 7B-Holodeck is a finetune of Mistral's 7B model.
## Training data | |
The training data contains around 3000 ebooks in various genres. | |
Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
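As a hypothetical illustration (the genre names and opening line below are not taken from the dataset), a prompt in this format would look like:

```
[Genre: horror, mystery]
The rain had not let up since the ferry left the mainland,
```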
### Limitations and Biases | |
Based on known problems with NLP technology, potential relevant factors include biases related to gender, profession, race, and religion.