---
base_model:
- huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned
- prithivMLmods/Primal-Mini-3B-Exp
- MaziyarPanahi/calme-3.3-llamaloi-3b
- prithivMLmods/Llama-Deepsync-3B
- lunahr/thea-3b-25r
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned](https://huggingface.co./huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned) as the base.

### Models Merged

The following models were included in the merge:

* [prithivMLmods/Primal-Mini-3B-Exp](https://huggingface.co./prithivMLmods/Primal-Mini-3B-Exp)
* [MaziyarPanahi/calme-3.3-llamaloi-3b](https://huggingface.co./MaziyarPanahi/calme-3.3-llamaloi-3b)
* [prithivMLmods/Llama-Deepsync-3B](https://huggingface.co./prithivMLmods/Llama-Deepsync-3B)
* [lunahr/thea-3b-25r](https://huggingface.co./lunahr/thea-3b-25r)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: lunahr/thea-3b-25r
    parameters:
      weight: 1.0
  - model: MaziyarPanahi/calme-3.3-llamaloi-3b
    parameters:
      weight: 1.0
  - model: prithivMLmods/Llama-Deepsync-3B
    parameters:
      weight: 1.0
  - model: prithivMLmods/Primal-Mini-3B-Exp
    parameters:
      weight: 1.0
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned
dtype: float16
normalize: true
```
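### Model Stock at a glance

Per the linked paper, Model Stock interpolates between the average of the fine-tuned weights and the base weights, with a ratio derived from the angle between the fine-tuned models' weight deltas: when the deltas agree (small angle), the merge stays close to their average; when they disagree, it falls back toward the base. The sketch below is a simplified, illustrative pure-Python version operating on flat per-layer weight vectors; the `model_stock_merge` helper and its geometry are an assumption based on the paper's merge ratio, not mergekit's actual implementation.

```python
import math


def cos_sim(a, b):
    """Cosine similarity between two flat weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def model_stock_merge(base, finetuned):
    """Illustrative Model Stock merge for one layer's weights.

    Uses the paper's ratio t = N*cos(theta) / (1 + (N-1)*cos(theta)),
    where cos(theta) is the average pairwise cosine similarity of the
    fine-tuned deltas (finetuned - base). This is a sketch, not
    mergekit's production code path.
    """
    n = len(finetuned)
    deltas = [[w - b for w, b in zip(ft, base)] for ft in finetuned]
    # Average pairwise cosine similarity between the deltas.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cos_theta = sum(cos_sim(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    # Interpolate between the average fine-tuned weights and the base.
    avg = [sum(ws) / n for ws in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

With identical deltas the ratio is 1 and the result is the plain average; with orthogonal deltas the ratio is 0 and the result collapses to the base weights, which is the intuition behind using the base model as an anchor in the config above.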