Lewdiculous committed adbf205 (parent: ede4739): Create README.md

README.md ADDED
@@ -0,0 +1,79 @@
---
library_name: transformers
license: other
language:
- en
tags:
- gguf
- quantized
- roleplay
- imatrix
- mistral
- merge
inference: false
base_model:
- Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
- Epiculous/Mika-7B
---

This repository hosts GGUF-Imatrix quantizations for [Test157t/Mika-Longtext-7b](https://huggingface.co/Test157t/Mika-Longtext-7b).

It could work better at longer context sizes. Maybe.

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).
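
To make the idea concrete, here is a toy sketch of importance-weighted quantization. This is an illustration only, not llama.cpp's actual quantizer; the `importance` array is a stand-in for the activation statistics collected from calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=256).astype(np.float32)      # one block of model weights
importance = rng.uniform(0.0, 1.0, size=256).astype(np.float32)  # stand-in for calibration stats

def weighted_error(scale: float) -> float:
    # Snap weights to a 4-bit signed grid at this scale, then measure
    # reconstruction error with each weight's importance as its penalty.
    q = np.clip(np.round(weights / scale), -8, 7)
    return float(np.sum(importance * (weights - q * scale) ** 2))

# Search candidate scales and keep the one with the lowest *weighted* error,
# so the weights the calibration data marks as important are preserved best.
candidates = np.linspace(1e-3, float(np.abs(weights).max()) / 7, 200)
best = min(candidates, key=weighted_error)
print(f"best scale: {best:.5f}  weighted error: {weighted_error(best):.5f}")
```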

**Steps:**
```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
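
The first stages map roughly onto llama.cpp's tooling. A minimal sketch, assuming a local llama.cpp checkout; script and binary names such as `convert.py` and `imatrix` have varied across llama.cpp versions, and the file paths here are hypothetical:

```python
import subprocess

# Base -> GGUF(F16): convert the HF checkpoint to an F16 GGUF.
subprocess.run([
    "python", "convert.py", "Mika-Longtext-7b",
    "--outtype", "f16", "--outfile", "Mika-Longtext-7b-F16.gguf",
], check=True)

# GGUF(F16) -> Imatrix-Data(F16): compute the importance matrix
# from the calibration text linked above.
subprocess.run([
    "./imatrix", "-m", "Mika-Longtext-7b-F16.gguf",
    "-f", "imatrix-with-rp-format-data.txt", "-o", "imatrix.dat",
], check=True)
```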

**Quants:**
```python
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
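
The final stage loops llama.cpp's quantize tool over that list. A minimal sketch, reusing the hypothetical file names from the earlier snippet; the `quantize` binary name is an assumption about the build:

```python
import subprocess

quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

# GGUF(F16) -> GGUF(Imatrix-Quants): produce one file per quant type,
# passing the importance matrix so low-bit quants keep more quality.
for quant in quantization_options:
    subprocess.run([
        "./quantize", "--imatrix", "imatrix.dat",
        "Mika-Longtext-7b-F16.gguf",
        f"Mika-Longtext-7b-{quant}.gguf", quant,
    ], check=True)
```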

If you want a quant that isn't listed here, or another model quantized, feel free to request it.

**Original model information:**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/dEotiYfMftpO71nbseG-q.jpeg)

This model was merged using the SLERP merge method.

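As a rough picture of what SLERP does, here is a toy sketch that interpolates two weight vectors along the arc between them rather than along a straight line. This is an illustration of the idea, not mergekit's exact implementation:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation: t=0 returns a, t=1 returns b."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between vectors
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # (near-)parallel: fall back to plain lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

w_a = np.array([1.0, 0.0])   # stand-in for one model's tensor
w_b = np.array([0.0, 1.0])   # stand-in for the other model's tensor
print(slerp(w_a, w_b, 0.5))  # halfway between the two models
```
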
### Models Merged

The following models were included in the merge:
* [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
* [Epiculous/Mika-7B](https://huggingface.co/Epiculous/Mika-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
        layer_range: [0, 32]
      - model: Epiculous/Mika-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
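
In the `t` schedule above, 0 keeps the base model's tensors and 1 takes the other model's, so the `self_attn` and `mlp` blocks are blended on opposite gradients across the layer stack. To reproduce the merge, the config can be fed to mergekit's `mergekit-yaml` entry point; a minimal sketch, assuming the YAML is saved as `mika-longtext.yaml` (the filename and output directory are hypothetical):

```python
import subprocess

# Run mergekit on the config; the merged model is written to the output dir.
subprocess.run(
    ["mergekit-yaml", "mika-longtext.yaml", "./Mika-Longtext-7b"],
    check=True,
)
```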