pipihand01 committed
Commit 15266d1 · verified · 1 Parent(s): f783da6

Update README.md

Files changed (1)
  1. README.md +59 -65
README.md CHANGED
---
license: apache-2.0
license_link: https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-linear25/blob/main/LICENSE
language:
- en
base_model:
- Qwen/QwQ-32B-Preview
- huihui-ai/QwQ-32B-Preview-abliterated
tags:
- chat
- abliterated
- uncensored
- mergekit
- merge
library_name: transformers
---
This is a 25% abliterated model obtained by linear-weighted merging of [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) (weight: 0.75) and [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated) (weight: 0.25), using [mergekit](https://github.com/arcee-ai/mergekit).

This is an experimental model. In my preliminary experiments, it gives more natural results than Qwen's original model on sensitive content while still maintaining its refusal capability.

Based on some of my experiments, I found that for "sensitive content" generation, the higher the percentage of abliteration, the more peaceful but direct the results tend to be, and the less "conflict and disagreement" they contain.

To get the best "uncensoring" effect from a low-percentage abliteration mixture like this one, it is better to use the model for RP or story writing without official or other "AI assistant" prompts. For example, use chat mode instead of instruct mode in [Text Generation web UI](https://github.com/oobabooga/text-generation-webui), and avoid prompting it as an AI assistant.
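
As a minimal sketch of this kind of non-assistant prompting (assuming the standard `transformers` text-generation API; the model ID is this repository, but the prompt and sampling settings are only illustrative):

```python
# Minimal sketch: continue a plain narrative prompt, with no system message
# and no assistant persona. Prompt text and sampling settings are arbitrary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pipihand01/QwQ-32B-Preview-abliterated-linear25"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Story-writing style prompt: plain narrative text, not an instruction to an assistant.
prompt = "The rain had not stopped for three days when the stranger knocked on the door.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```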

From my experiments, this model removes some of the artificial wording of the "censorship" when prompted correctly for RP or story writing.

**NOTE: I bear no responsibility for any output of this model. When properly prompted, this model may generate content that is not suitable in some situations. Use it at your own risk.**

---
# my_QwQ-32B-Preview-abliterated-linear25

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
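
Concretely, the linear method takes a per-parameter weighted average of the source checkpoints. As a rough sketch of the arithmetic it performs on each tensor (this is not mergekit itself, which also handles checkpoint loading, the tokenizer, and metadata):

```python
# Illustrative linear merge of two state dicts with weights 0.75 / 0.25.
import torch

def linear_merge(state_dict_a, state_dict_b, weight_a=0.75, weight_b=0.25):
    merged = {}
    for name, tensor_a in state_dict_a.items():
        tensor_b = state_dict_b[name]  # assumes identical architectures and shapes
        merged[name] = (weight_a * tensor_a.float() + weight_b * tensor_b.float()).to(torch.bfloat16)
    return merged
```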

### Models Merged

The following models were included in the merge:
* [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated)
* [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Qwen/QwQ-32B-Preview
    parameters:
      weight: 0.75
  - model: huihui-ai/QwQ-32B-Preview-abliterated
    parameters:
      weight: 0.25
merge_method: linear
dtype: bfloat16
```
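
To reproduce the merge, a configuration like this is normally passed to mergekit's `mergekit-yaml` command-line tool (for example, `mergekit-yaml config.yaml ./output-model-directory`); see the mergekit repository for the exact options supported by your version.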