Lewdiculous committed on
Commit
925e3a6
1 Parent(s): 5462fec

Update README.md

Files changed (1)
  1. README.md +2 -38
README.md CHANGED
@@ -12,12 +12,10 @@ tags:
  - mistral
  - conversational
  ---
- # #Roleplay #Multimodal #Vision
+ # #Roleplay #Writing #Creative
 
  This repository hosts GGUF-IQ-Imatrix quants for [ResplendentAI/Aura_7B](https://huggingface.co/ResplendentAI/Aura_7B).
 
- This is a **#multimodal** model that also has **#vision** capabilities. <br> Read the full card information if you also want to use that functionality.
-
  "Please read all the model information at the bottom of the card."
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/HxOf1b4n4EyADoNIl2fOW.png)
@@ -33,44 +31,10 @@ The **Imatrix** is calculated based on calibration data, and it helps determine
  The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
  [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
 
- For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was just to add a bit more diversity to the data.
+ For imatrix data generation, kalomaze's `groups_merged.txt` with additional roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was just to add a bit more diversity to the data.
 
  </details><br>
 
- # Vision/multimodal capabilities:
-
- <details><summary>
- ⇲ Click here to expand/hide how this would work in practice in a roleplay chat.
- </summary>
-
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/Kkfi_CizIk0ZXMRF8N5jo.jpeg)
-
- </details><br>
-
- <details><summary>
- ⇲ Click here to expand/hide what your SillyTavern Image Captions extension settings should look like.
- </summary>
-
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UpXOnVrzvsMRYeqMaSOaa.jpeg)
-
- </details><br>
-
- **If you want to use vision functionality:**
-
- * Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
-
- To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file, you can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or as uploaded in the **mmproj** folder in the repository.
-
- * You can load the **mmproj file** by using the corresponding section in the interface:
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
-
- * For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:
-
- ```
- --mmproj your-mmproj-file.gguf
- ```
-
  # Quantization information:
 

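For readers who want to reproduce the imatrix step the card describes, the sketch below shows how an importance matrix is typically generated and then applied during quantization with llama.cpp's `imatrix` and `quantize` tools. This is a minimal illustration only, not the exact commands used for these quants; the model and output file names are placeholders, and binary names can differ between llama.cpp versions.

```
# Generate an importance matrix from a calibration text file
# (e.g. groups_merged.txt extended with roleplay-format chats).
./imatrix -m Aura_7B-f16.gguf -f imatrix-with-rp-format-data.txt -o Aura_7B.imatrix

# Quantize the f16 model to an IQ-type quant, guided by the imatrix,
# so the most important activations keep more precision.
./quantize --imatrix Aura_7B.imatrix Aura_7B-f16.gguf Aura_7B-IQ4_XS-imat.gguf IQ4_XS
```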
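The vision section removed by this commit only shows the bare `--mmproj` flag. As a hedged illustration of how that flag fits into a full launch command, a KoboldCpp invocation might look like the sketch below; the file names are placeholders, and the flags should be checked against the KoboldCpp version actually in use.

```
# Load the quantized model together with the multimodal projector
# so image inputs can be captioned (e.g. via SillyTavern's Image Captions extension).
python koboldcpp.py --model Aura_7B-Q4_K_M-imat.gguf --mmproj mmproj-model-f16.gguf
```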