Update README.md
README.md CHANGED
@@ -8,13 +8,13 @@ tags:
 - mergekit
 ---
 
-# Llama-3.1-70B-Instruct-
+# Llama-3.1-70B-Instruct-lorablated
 
 ![KhorYYG.png](https://i.imgur.com/KhorYYG.png)
 
 This is an uncensored version of [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it) using [@grimjim](https://huggingface.co/grimjim)'s recipe.
 
-More precisely, this is a **LoRA-abliterated** model:
+More precisely, this is a **LoRA-abliterated** (lorablated) model:
 
 1. **Extraction**: We extract a LoRA adapter by comparing two models: a censored Llama 3 and an abliterated Llama 3
 2. **Merge**: We merge this new LoRA adapter using [task arithmetic](https://arxiv.org/abs/2212.04089) to a censored Llama 3.1 to abliterate it.
@@ -61,5 +61,5 @@ pip install bitsandbytes
 mergekit-extract-lora failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 meta-llama/Meta-Llama-3-70B-Instruct Llama-3-70B-Instruct-abliterated-LORA --rank=64
 
 # Merge using previous config
-mergekit-yaml config.yaml Llama-3.1-70B-Instruct-
+mergekit-yaml config.yaml Llama-3.1-70B-Instruct-lorablated --allow-crimes --lora-merge-cache=./cache
 ```
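The `config.yaml` passed to `mergekit-yaml` is not part of this diff. As a rough, hypothetical sketch only (assuming mergekit's `task_arithmetic` merge method and its `model+LoRA` path syntax for applying the extracted adapter; the actual file in the repository may differ), it could look like this:

```yaml
# Hypothetical sketch of the config.yaml referenced above (not taken from this commit):
# apply the extracted abliteration LoRA to a censored Llama 3.1 70B Instruct via task arithmetic.
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]   # 70B Llama models have 80 transformer layers
    model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```

The `--lora-merge-cache=./cache` flag in the merge command is consistent with such a config: when a `model+LoRA` path is used, mergekit materializes the combined weights into that cache directory before performing the merge.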