---
base_model: v000000/L3-Umbral-Storm-8B-t0.0001
library_name: transformers
tags:
- merge
- llama
- not-for-all-audiences
- llama-cpp
---

# v000000/L3-Umbral-Storm-8B-t0.0001-Q8_0-GGUF

This model was converted to GGUF format from [`v000000/L3-Umbral-Storm-8B-t0.0001`](https://huggingface.co./v000000/L3-Umbral-Storm-8B-t0.0001) using llama.cpp. Refer to the [original model card](https://huggingface.co./v000000/L3-Umbral-Storm-8B-t0.0001) for more details on the model.

# Llama-3-Umbral-Storm-8B (8K) (GGUF)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/U1SpJquvTpW_f-lCA86wW.png)

RP model that uses "L3-Umbral-Mind-v2.0" as a base, nearswapped with "Storm", one of the smartest L3.1 models.

* Warning: Because it is based on Mopey-Mule, this model leans negative; do not use it for truthful information or advice.

-------------------------------------------------------------------------------

## merge

This is a merge of pre-trained language models.

## Merge Details

This model is on the Llama-3 architecture with Llama-3.1 merged in, so it has an 8K context length. The context could possibly be extended slightly with RoPE scaling due to the L3.1 layers.

### Merge Method

This model was merged using the NEARSWAP t0.0001 merge algorithm.
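The exact NEARSWAP implementation is not reproduced here; the following is a minimal illustrative sketch, under the assumption that "nearswap" blends each base parameter toward the secondary model with strength inversely proportional to how far apart the two values are, so only parameters already within the threshold `t` are fully swapped. The function name and threshold behavior are assumptions for illustration, not the published code.

```python
def nearswap(v0, v1, t):
    """Hypothetical sketch (assumption, not the published implementation):
    interpolate base weights v0 toward secondary weights v1, with blend
    strength decaying as the parameter-wise difference grows past t."""
    merged = []
    for a, b in zip(v0, v1):
        diff = abs(a - b)
        # Full swap when the two parameters are within t of each other;
        # otherwise the interpolation weight falls off as t / diff.
        lweight = 1.0 if diff <= t else min(1.0, t / diff)
        merged.append(a * (1.0 - lweight) + b * lweight)
    return merged

# With t = 0.0001, only near-identical parameters blend meaningfully,
# which would keep the base model's behavior largely intact.
base = [0.50, 0.10, -0.30]
secondary = [0.50005, 0.90, -0.30]
merged = nearswap(base, secondary, t=0.0001)
```

Under this reading, a tiny `t` such as 0.0001 makes the merge very conservative: weights that diverge between the two models stay close to the base model's values.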
### Models Merged

The following models were included in the merge:
* Base model: [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co./Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B)
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co./akjindal53244/Llama-3.1-Storm-8B)

### Configuration

```yaml
slices:
  - sources:
      - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
        layer_range: [0, 32]
      - model: akjindal53244/Llama-3.1-Storm-8B
        layer_range: [0, 32]
merge_method: nearswap
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  t:
    - value: 0.0001
dtype: bfloat16
```

# Prompt Template:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
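The template above can be assembled programmatically. A small sketch (the helper name `build_llama3_prompt` is my own, not part of any library); the prompt ends with the assistant header so the model generates the reply:

```python
def build_llama3_prompt(system_prompt: str, user_input: str) -> str:
    """Fill the Llama-3 chat template with a system prompt and user turn,
    leaving the assistant turn open for the model to complete."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a roleplay partner.", "Hello!")
```

When serving the GGUF with llama.cpp, the chat template baked into the file typically applies this formatting automatically; manual assembly like this is only needed for raw completion endpoints.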