---
base_model:
- Undi95/Meta-Llama-3-8B-Instruct-hf
library_name: transformers
tags:
- mergekit
- merge
---

# Llama-3-8B-Ultra-Instruct

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co./Undi95/Meta-Llama-3-8B-Instruct-hf) as the base model.

### Models Merged

The following models were included in the merge:

* llama-3-8B-ultra-instruct/InstructPart
* llama-3-8B-ultra-instruct/RPPart

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Undi95/Meta-Llama-3-8B-Instruct-hf
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
  - layer_range: [0, 32]
    model: llama-3-8B-ultra-instruct/RPPart
    parameters:
      weight: 0.39
  - layer_range: [0, 32]
    model: llama-3-8B-ultra-instruct/InstructPart
    parameters:
      weight: 0.26
  - layer_range: [0, 32]
    model: Undi95/Meta-Llama-3-8B-Instruct-hf
```
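
Since the card lists `library_name: transformers`, the merged checkpoint should load like any other Llama-3 Instruct model. The snippet below is a minimal sketch for sanity-checking the merge; the repository id `your-namespace/Llama-3-8B-Ultra-Instruct` is a placeholder, not a confirmed path.

```python
# Minimal sketch: load the merged model with transformers and run a chat-style prompt.
# "your-namespace/Llama-3-8B-Ultra-Instruct" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Llama-3-8B-Ultra-Instruct"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Llama-3 Instruct ships a chat template, so apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Summarize what a DARE-TIES merge does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```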