---
base_model:
- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- mlabonne/Llama-3-70B-Instruct-abliterated-LORA
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
---
![image/png](https://huggingface.co./nbeerbower/Llama-3.1-Nemotron-lorablated-70B/resolve/main/nemotron.png?download=true)

# Llama-3.1-Nemotron-lorablated-70B

An uncensored version of [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co./nvidia/Llama-3.1-Nemotron-70B-Instruct-HF), created by merging in [mlabonne/Llama-3-70B-Instruct-abliterated-LORA](https://huggingface.co./mlabonne/Llama-3-70B-Instruct-abliterated-LORA) using [task arithmetic](https://arxiv.org/abs/2212.04089).

## Method

This model was created using [mergekit](https://github.com/cg123/mergekit). From Ubuntu 24.04 (as root):

```bash
apt update
apt install pipx
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pipx install -e .
mergekit-yaml config.yaml Llama-3.1-Nemotron-lorablated-70B --allow-crimes --lora-merge-cache=./cache
```

See [@mlabonne](https://huggingface.co./mlabonne)'s [Llama-3.1-70B-Instruct-lorablated](https://huggingface.co./mlabonne/Llama-3.1-70B-Instruct-lorablated) for more details on how the LoRA was extracted.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]
    model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```

### Acknowledgements

Thanks to [@mlabonne](https://huggingface.co./mlabonne), [@grimjim](https://huggingface.co./grimjim), and [@failspy](https://huggingface.co./failspy) for pioneering this technique for uncensoring models.

Compute provided by [Hetzner](https://www.hetzner.com/) and funded by [Schneewolf Labs](https://schneewolflabs.com/).
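
### Task arithmetic sketch

For intuition, [task arithmetic](https://arxiv.org/abs/2212.04089) treats the difference between a fine-tuned model and its base as a "task vector" and adds it, scaled by `weight`, to the base weights. In the config above, the `model+LoRA` syntax asks mergekit to materialize the abliterated LoRA onto Nemotron, so the merge effectively bakes the refusal-removal delta directly into the base weights. Below is a minimal sketch of the arithmetic over plain state dicts, assuming matching keys and shapes; the function and variable names are illustrative, not mergekit's internal API:

```python
# Minimal sketch of task arithmetic over state dicts
# (https://arxiv.org/abs/2212.04089). All names here are illustrative
# assumptions -- this is not how mergekit implements the merge internally.
import torch

def task_arithmetic(base_sd: dict, tuned_sds: list, weights: list) -> dict:
    """merged = base + sum_i weight_i * (tuned_i - base)."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = torch.zeros_like(base_param)
        for tuned_sd, w in zip(tuned_sds, weights):
            # Each (tuned - base) difference is a task vector, scaled by its weight.
            delta += w * (tuned_sd[name] - base_param)
        merged[name] = base_param + delta
    return merged
```

With a single source model and `weight: 1.0`, as in this config, the formula reduces to copying the LoRA-patched weights through unchanged.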
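
### Usage

Since the card declares `library_name: transformers`, the merged model should load like any Llama-3.1 checkpoint. A minimal inference sketch, assuming the repo ID shown in the image URL above and enough GPU memory for a 70B model in bfloat16; the prompt is a placeholder:

```python
# Hypothetical usage sketch; the repo ID and generation settings are
# assumptions, not tested output from the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Llama-3.1-Nemotron-lorablated-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in config.yaml
    device_map="auto",           # a 70B model needs multiple GPUs or offloading
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```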