---
inference: false
tags:
- gguf
- quantized
- roleplay
- multimodal
- vision
- llava
- sillytavern
- merge
- mistral
- conversational
---
# #Roleplay #Multimodal #Vision
This repository hosts GGUF-IQ-Imatrix quants for [Nitral-AI/Nyanade_Stunna-Maid-7B](https://huggingface.co./Nitral-AI/Nyanade_Stunna-Maid-7B).
This is a **#multimodal** model that also has **#vision** capabilities. <br> Please read the full card, including the original model information at the bottom, if you want to use that functionality.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/HxOf1b4n4EyADoNIl2fOW.png)
**What does "Imatrix" mean?**
<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co./Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The roleplay chats were included simply to add a bit more diversity to the calibration data.
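As a rough sketch, the imatrix file is produced with llama.cpp's imatrix tool along these lines (file names are illustrative, and older llama.cpp builds call the tool `imatrix` rather than `llama-imatrix`):
```
# Compute the importance matrix from the calibration text file:
./llama-imatrix -m model-f16.gguf -f imatrix-with-rp-format-data.txt -o imatrix.dat
```
The resulting `imatrix.dat` is then fed into the quantization step described further down in this card.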
</details><br>
# Vision/multimodal capabilities:
<details><summary>
⇲ Click here to expand/hide how this would work in practice in a roleplay chat.
</summary>
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/Kkfi_CizIk0ZXMRF8N5jo.jpeg)
</details><br>
<details><summary>
⇲ Click here to expand/hide what your SillyTavern Image Captions extension settings should look like.
</summary>
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UpXOnVrzvsMRYeqMaSOaa.jpeg)
</details><br>
**If you want to use vision functionality:**
* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file. You can get it [here](https://huggingface.co./cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or from the **mmproj** folder in this repository.
* You can load the **mmproj file** by using the corresponding section in the interface:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
* For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:
```
--mmproj your-mmproj-file.gguf
```
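For example, a full KoboldCpp launch command could look like the following (the model file name is a placeholder for whichever quant you downloaded):
```
python koboldcpp.py --model Nyanade_Stunna-Maid-7B-Q5_K_M.gguf --mmproj mmproj-model-f16.gguf
```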
# Quantization information:
<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
**Steps performed:**
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
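As a minimal sketch, the final step amounts to looping the list above over llama.cpp's quantize tool together with the imatrix file (names are illustrative; older builds ship the tool as `quantize`):
```
for q in Q4_K_M Q4_K_S IQ4_XS Q5_K_M Q5_K_S Q6_K Q8_0 IQ3_M IQ3_S IQ3_XXS; do
  ./llama-quantize --imatrix imatrix.dat model-f16.gguf "model-${q}.gguf" "$q"
done
```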
</details><br>
# Original model information:
Aura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05, as this model can get carried away with prose at higher temperatures. That said, this model's prose is distinct from the usual GPT-3.5/4 style and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
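If you run the GGUF directly with llama.cpp rather than a frontend, those samplers map to command-line flags roughly like this (the binary name varies by version; in KoboldCpp or SillyTavern you would set the same values in the sampler panel instead):
```
./llama-cli -m model.gguf --temp 1.5 --min-p 0.05
```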
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations. |
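For reference, a ChatML multiturn prompt is structured like this (the system prompt text is just a placeholder):
```
<|im_start|>system
You are a helpful roleplay companion.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
```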