aashish1904 committed
Commit b936011
1 Parent(s): 4f30ed8

Upload README.md with huggingface_hub

---
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
pipeline_tag: text-generation
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/Llama-3.1-8B-Instruct-abliterated_via_adapter-GGUF

This is a quantized version of [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter) created using llama.cpp.
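The GGUF files in this repository can be run with any llama.cpp-based runtime. A minimal sketch using the `llama-cpp-python` bindings, assuming the package is installed; note that the quant filename below is illustrative, not taken from this repository:

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and run one chat turn.
# Assumption: the .gguf filename is hypothetical -- substitute an actual file
# downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-8B-Instruct-abliterated_via_adapter.Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about GGUF."}]
)
print(out["choices"][0]["message"]["content"])
```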
# Original Model Card

# Llama-3.1-8B-Instruct-abliterated_via_adapter

This model is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

A LoRA was applied to "abliterate" refusals in [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). The result appears to work even though the LoRA was derived from Llama 3 rather than Llama 3.1, which implies significant feature commonality between the 3 and 3.1 models.

The LoRA was extracted from [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
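Conceptually, extracting a LoRA from a full finetune means taking the weight delta between the finetuned and base checkpoints and compressing it into low-rank factors, typically via a truncated SVD. A sketch of that idea for a single weight matrix (illustrative tensors and rank; not the actual extraction pipeline used for this adapter):

```python
# Illustrative only: approximate a finetune's weight delta with low-rank
# (LoRA-style) factors via truncated SVD.
import torch

def extract_lora(base_w: torch.Tensor, tuned_w: torch.Tensor, rank: int = 16):
    """Approximate (tuned_w - base_w) as lora_b @ lora_a."""
    delta = tuned_w - base_w
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_b = u[:, :rank] * s[:rank]   # (out_features, rank)
    lora_a = vh[:rank, :]             # (rank, in_features)
    return lora_a, lora_b

base_w = torch.randn(256, 256)
tuned_w = base_w + 0.01 * torch.randn(256, 256)
lora_a, lora_b = extract_lora(base_w, tuned_w)
# Applying the extracted adapter recovers an approximation of the finetune.
approx = base_w + lora_b @ lora_a
print(torch.linalg.norm(tuned_w - approx) / torch.linalg.norm(tuned_w - base_w))
```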
Built with Llama.
## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B) as a base.
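Task arithmetic merges models by adding weighted "task vectors" (finetuned weights minus base weights) back onto the base: `merged = base + sum_i(weight_i * (model_i - base))`. A minimal sketch of the idea on toy state dicts; this model uses a single source with weight 1.0, matching the configuration below:

```python
# Minimal sketch of the task-arithmetic merge (arXiv:2212.04089):
#   merged = base + sum_i(weight_i * (model_i - base))
# Toy tensors only; real merges operate per-layer over full checkpoints.
import torch

def task_arithmetic(base: dict, models: list[dict], weights: list[float]) -> dict:
    merged = {}
    for name, base_w in base.items():
        task_vectors = (w * (m[name] - base_w) for m, w in zip(models, weights))
        merged[name] = base_w + sum(task_vectors)
    return merged

base = {"layer.weight": torch.zeros(4, 4)}
tuned = {"layer.weight": torch.ones(4, 4)}
merged = task_arithmetic(base, [tuned], [1.0])
print(merged["layer.weight"])  # equals the tuned weights when weight == 1.0
```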
### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: meta-llama/Meta-Llama-3.1-8B-Instruct+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
```
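Configs like this are typically executed with mergekit's `mergekit-yaml` CLI. Mergekit also exposes a Python entry point; the sketch below is an assumption based on the names documented in mergekit's README (`MergeConfiguration`, `run_merge`), not something stated on this card:

```python
# Sketch: run the YAML config above through mergekit's Python API.
# Assumes `pip install mergekit`; API names per the mergekit README.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```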