---
license: other
inference: false
tags:
- gguf
- quantized
- roleplay
- multimodal
- vision
- sillytavern
- merge
- mistral
---
This repository hosts GGUF-IQ-Imatrix quants for [Nitral-AI/Eris_PrimeV4-Vision-32k-7B](https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-32k-7B).

"More stable and with better long context handling."

This is a **#multimodal** model that also has **#vision** capabilities. Read the full card information if that is your use case.

Quants:
```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
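
If you only want one of these files, a minimal download sketch with `huggingface-cli` follows; the repository path and file name shown are illustrative assumptions, so match them to this repo's actual file list:

```
# illustrative repo id and file name -- check the "Files" tab for the real ones
huggingface-cli download Lewdiculous/Eris_PrimeV4-Vision-32k-7B-GGUF-IQ-Imatrix \
  Eris_PrimeV4-Vision-32k-7B-Q4_K_M-imat.gguf --local-dir .
```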

**What does "Imatrix" mean?**

<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The roleplay chats were added to give the calibration data a bit more diversity.
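
As a rough sketch of how that file is used (file names are illustrative), llama.cpp's `imatrix` tool reads the F16 GGUF together with the calibration text and writes out the importance matrix:

```
# compute the importance matrix from the calibration text
./imatrix -m model-f16.gguf -f imatrix-with-rp-format-data.txt -o imatrix.dat
```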

</details><br>

# Vision/multimodal capabilities:

<details><summary>
⇲ Click here to expand/hide how this would work in practice in a roleplay chat.
</summary>

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qGO0nIfZVcyuio5J07sU-.jpeg)

</details><br>

<details><summary>
⇲ Click here to expand/hide what your SillyTavern Image Captions extension settings should look like.
</summary>

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UpXOnVrzvsMRYeqMaSOaa.jpeg)

</details><br>

**If you want to use vision functionality:**

* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).

To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file; you can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or as uploaded in this repository.

* You can load the **mmproj** by using the corresponding section in the interface:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)

* For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:

```
--mmproj your-mmproj-file.gguf
```
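
Put together, a full CLI launch might look like the sketch below; the model file name is illustrative, so substitute whichever quant you downloaded:

```
# load the quantized model plus the mmproj, with the model's 32k context
python koboldcpp.py --model Eris_PrimeV4-Vision-32k-7B-Q4_K_M-imat.gguf \
  --mmproj mmproj-model-f16.gguf --contextsize 32768
```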

# Quantization information:

<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>

**Steps performed:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
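
In llama.cpp terms, those steps correspond roughly to the sketch below; the script and file names are illustrative and depend on the llama.cpp version used:

```
# Base -> GGUF(F16): convert the original model to an F16 GGUF
python convert.py ./Eris_PrimeV4-Vision-32k-7B --outtype f16 --outfile model-f16.gguf

# Imatrix-Data(F16): compute the importance matrix from the calibration data
./imatrix -m model-f16.gguf -f imatrix-with-rp-format-data.txt -o imatrix.dat

# GGUF(Imatrix-Quants): produce each quant listed above, e.g. Q4_K_M
./quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M-imat.gguf Q4_K_M
```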

</details><br>

# Original model information:

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/xnDxqMZRVOAUfTSerDFJB.jpeg)

# Eris Prime: Version 4.0 32k
After many trials and tribulations, we have a winner: a more coherent, format-stable version of Eris Prime v4 with better long-context handling.