---
base_model:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# merge

This is a testing model using the zeroing method used by [elinas/Llama-3-15B-Instruct-zeroed](https://huggingface.co./elinas/Llama-3-15B-Instruct-zeroed).

If this model pans out the way I hope, I'll heal it and then reupload it with a custom model card like the others. Currently, this is just an experiment.

In case anyone asks, AbL3In-15b literally means:

```yaml
Ab  = Abliterated
L3  = Llama-3
In  = Instruct
15b = it has 15B parameters
```

## Merge Details

### Merge Method

This model was merged using the passthrough merge method. Layers 8-24 of the base model are duplicated twice, and the `o_proj` and `down_proj` weights of the duplicated blocks are scaled to zero, so the extra layers initially contribute nothing to the output (the "zeroing" trick) and can be healed with further training.

### Models Merged

The following models were included in the merge:
* [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co./failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- sources:
  - layer_range: [8, 24]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
```
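
If you want to reproduce the merge yourself, below is a minimal sketch using mergekit's Python API, assuming mergekit is installed and the configuration above is saved as `config.yml`. The output path and option values are illustrative, and option names may vary between mergekit versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the passthrough configuration shown above (saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the resulting 15B model to ./AbL3In-15b.
run_merge(
    merge_config,
    out_path="./AbL3In-15b",
    options=MergeOptions(
        copy_tokenizer=True,  # carry the tokenizer over from the source model
        lazy_unpickle=True,   # reduce peak RAM while loading shards
    ),
)
```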