---
base_model:
- Undi95/Llama-3-Unholy-8B
- meta-llama/Meta-Llama-3-8B-Instruct
- taozi555/Llama-3-8B-Instruct-pippa
library_name: transformers
tags:
- mergekit
- merge
---
# output_model_merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct) as the base. Task arithmetic adds each fine-tuned model's "task vector" (its parameter delta relative to the base), scaled by a per-model weight, onto the base weights; a minimal sketch of this computation follows the configuration below.

### Models Merged

The following models were included in the merge:
* [Undi95/Llama-3-Unholy-8B](https://huggingface.co./Undi95/Llama-3-Unholy-8B)
* [taozi555/Llama-3-8B-Instruct-pippa](https://huggingface.co./taozi555/Llama-3-8B-Instruct-pippa)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: taozi555/Llama-3-8B-Instruct-pippa
    parameters:
      weight: 0.42
  - model: Undi95/Llama-3-Unholy-8B
    parameters:
      weight: 0.29
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.48
dtype: bfloat16
```
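For illustration, here is a minimal sketch of what a task-arithmetic merge computes. This is not mergekit's implementation; it assumes all models share identical parameter names and shapes, and the function name `task_arithmetic_merge` is hypothetical:

```python
import torch

def task_arithmetic_merge(base, finetuned, weights):
    """Sketch of task arithmetic: merged = base + sum_i w_i * (model_i - base).

    base:      state dict of the base model
    finetuned: list of state dicts of the fine-tuned models
    weights:   the per-model `weight` values from the YAML above
    """
    merged = {}
    for name, base_param in base.items():
        # Each model's "task vector" is its delta from the base, scaled by its weight.
        delta = sum(w * (sd[name].float() - base_param.float())
                    for sd, w in zip(finetuned, weights))
        # Cast to bfloat16, matching the `dtype` in the config above.
        merged[name] = (base_param.float() + delta).to(torch.bfloat16)
    return merged
```

Note that under the plain formula sketched here, the two `meta-llama/Meta-Llama-3-8B-Instruct` entries in the `models` list contribute a zero task vector, since their deltas relative to the base are zero; mergekit's actual behavior may differ (for example, if weight normalization is enabled).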
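A hypothetical example of loading the merged model with 🤗 Transformers. The path `./output_model_merge` is an assumption; substitute the directory or Hub repo id where this merge was actually saved:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed path: point this at the saved merge output or its Hub repo id.
model_path = "./output_model_merge"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```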