---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- Nitral-AI/Infinitely-Laydiculous-7B
library_name: transformers
tags:
- mergekit
- merge
---
# GGUF / IQ / Imatrix for [Infinite-Laymons-7B](https://huggingface.co/ABX-AI/Infinite-Laymons-7B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/4rXva2I6VB07XtR_H6Orc.png)

**Why Importance Matrix?**

In my testing, an **importance matrix** improves the output quality of "IQ"-type quantizations, where the compression becomes quite heavy.
The **imatrix** performs a calibration using a provided dataset. Testing has shown that semi-randomized data can help preserve the more important segments as the compression is applied.

Related discussions on GitHub:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

The imatrix.txt file that I used contains general, semi-random data, with some custom kink.

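For reference, the imatrix workflow in llama.cpp looks roughly like the following. This is a sketch, not the exact commands used for this repo: the binary names have changed across llama.cpp versions (older builds use `imatrix` and `quantize` instead of `llama-imatrix` and `llama-quantize`), and the file names here are placeholders.

```shell
# Compute the importance matrix from a calibration text file
# against the full-precision GGUF model.
./llama-imatrix -m model-f16.gguf -f imatrix.txt -o imatrix.dat

# Apply the importance matrix while quantizing to a
# heavily-compressed IQ format.
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_XXS.gguf IQ3_XXS
```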
# Infinite-Laymons-7B

This model is intended for fictional storytelling and role-playing, with a focus on more original conversations and less alignment.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.
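As a rough illustration of the idea (not mergekit's actual implementation), SLERP interpolates between two weight tensors along the great circle between them, rather than linearly, which tends to better preserve the magnitude structure of the weights:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns `a`, t = 1 returns `b`.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    # Angle between the two tensors, clipped for numerical safety.
    omega = np.arccos(np.clip(np.dot(a_n.ravel(), b_n.ravel()), -1.0, 1.0))
    if omega < eps:  # Nearly parallel: fall back to linear interpolation.
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# Example: halfway between two orthogonal unit vectors.
w0 = np.array([1.0, 0.0])
w1 = np.array([0.0, 1.0])
mid = slerp(0.5, w0, w1)
```

In the configuration below, the `t` parameter controls this interpolation factor per layer group, with separate value schedules for the attention and MLP weights.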
### Models Merged

The following models were included in the merge:
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: KatyTheCutie/LemonadeRP-4.5.3
    layer_range: [0, 32]
  - model: Nitral-AI/Infinitely-Laydiculous-7B
    layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Infinitely-Laydiculous-7B
parameters:
  t:
  - filter: self_attn
    value: [0.7, 0.3, 0.6, 0.2, 0.5]
  - filter: mlp
    value: [0.3, 0.7, 0.4, 0.8, 0.5]
  - value: 0.5
dtype: bfloat16
```