---
base_model:
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
- meta-llama/Meta-Llama-3.1-8B-Instruct
- meta-llama/Meta-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
---

# outputModels

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della merge method, with [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co./meta-llama/Meta-Llama-3.1-8B) as the base model.

### Models Merged

The following models were included in the merge:

* [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co./mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated)
* [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3.1-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
    parameters:
      weight: 1
  - model: meta-llama/Meta-Llama-3.1-8B-Instruct
    parameters:
      weight: 1
merge_method: della
base_model: meta-llama/Meta-Llama-3.1-8B
parameters:
  normalize: false
  int8_mask: true
  density: 0.7
  lambda: 1.1
  epsilon: 0.25
dtype: float16
```
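
### Usage

The merge itself can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` CLI on it. Once merged (or downloaded), the model can be used like any other `transformers` causal LM. Below is a minimal usage sketch; the repository id `your-username/outputModels` is a placeholder and should be replaced with the actual location of the merged weights.

```python
# Minimal inference sketch. Assumes the merged model has been pushed to the Hub
# under a hypothetical repo id; substitute your own path or local directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/outputModels"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Write a haiku about model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```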