---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
- not-for-all-audiences
---

# Model Card: mythospice-70b

This is a Llama 2-based model consisting of a merge of several models using SLERP:

- [jondurbin/spicyboros-70b-2.2](https://huggingface.co./jondurbin/spicyboros-70b-2.2)
- [elinas/chronos-70b-v2](https://huggingface.co./elinas/chronos-70b-v2)
- [NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co./NousResearch/Nous-Hermes-Llama2-70b)

## Usage

Because this model is a merge of multiple models, several prompt formats may work, but you can try the Alpaca instruction format (a minimal loading sketch is included at the end of this card):

```
### Instruction:

### Input:

### Response:
```

## Bias, Risks, and Limitations

In addition to the biases exhibited by the base model, this model will show biases similar to those observed in niche roleplaying forums on the Internet. It is not intended to supply factual information or advice in any form.

## Training Details

This model is a merge. Please refer to the linked repositories of the merged models for details.
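
As a rough illustration of what a SLERP merge does, the hypothetical per-tensor sketch below spherically interpolates between two weight tensors. It is only a conceptual example and is not the actual script or configuration used to produce this model.

```python
# Hypothetical per-tensor SLERP sketch (not the actual merge procedure used for this model).
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a = w_a.flatten().float()
    b = w_b.flatten().float()
    # Angle between the two tensors on the hypersphere.
    cos_omega = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos_omega)
    if omega.abs() < 1e-4:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a \
               + (torch.sin(t * omega) / sin_omega) * b
    return merged.reshape(w_a.shape).to(w_a.dtype)

# Example: blend two tensors halfway along the arc between them.
merged_tensor = slerp(torch.randn(8, 8), torch.randn(8, 8), t=0.5)
```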
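
Finally, here is a minimal sketch of loading the model with transformers and prompting it in the Alpaca format described in the Usage section. The repository id, dtype/device settings, and sampling parameters are placeholders and should be adapted to your setup; a 70B model needs substantial VRAM or offloading.

```python
# Minimal loading/prompting sketch; repository id and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/mythospice-70b"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Alpaca-style instruction prompt, as suggested above.
prompt = (
    "### Instruction:\n"
    "Write a short scene introducing a wandering bard.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```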