---
base_model:
- Replete-AI/L3-Pneuma-8B
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della_linear merge method, with [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co./ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) as the base model.

### Models Merged

The following models were included in the merge:
* [Replete-AI/L3-Pneuma-8B](https://huggingface.co./Replete-AI/L3-Pneuma-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
out_dtype: bfloat16
dtype: float32
tokenizer_source: base
merge_method: della_linear
parameters:
  int8_mask: true
  density: 0.5
  epsilon: 0.04
  lambda: 1.05
base_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
models:
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: up_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - value: 1
  - model: Replete-AI/L3-Pneuma-8B
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
```
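The `value` lists in the configuration are weight gradients: mergekit expands each list of anchor values into one weight per transformer layer, so the RPMax model dominates the early and late layers of the filtered projections while Pneuma dominates the middle. As a minimal sketch of that expansion (assuming simple linear interpolation of the anchors across layer depth, and assuming the 32-layer count of Llama-3.1-8B; the function name is illustrative, not a mergekit API):

```python
# Sketch: expand a mergekit-style gradient list into per-layer weights.
# Assumes linear interpolation across layer depth; 32 layers is an
# assumption for Llama-3.1-8B, not read from the config.

def layer_weight(anchors: list[float], layer: int, num_layers: int) -> float:
    """Linearly interpolate the anchor list at a given layer index."""
    # Position of this layer along the anchor list, in [0, len(anchors) - 1].
    t = layer / (num_layers - 1) * (len(anchors) - 1)
    lo = int(t)
    hi = min(lo + 1, len(anchors) - 1)
    frac = t - lo
    return anchors[lo] * (1 - frac) + anchors[hi] * frac

# The RPMax gradient from the config above: full weight at the ends,
# zero weight through the middle layers.
anchors = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
weights = [layer_weight(anchors, i, 32) for i in range(32)]
print(weights)  # high near layers 0 and 31, near zero mid-stack
```

The Pneuma gradient is the complement of this list, so at every depth the two per-layer weights sum to 1 and the merge hands each layer primarily to one parent model.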