Lewdiculous committed
Commit 0d67f2c • 1 Parent(s): 925e3a6

Update README.md
README.md CHANGED
@@ -31,7 +31,7 @@ The **Imatrix** is calculated based on calibration data, and it helps determine
 The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
 [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
 
-For imatrix data generation, kalomaze's `groups_merged.txt` with additional roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/
+For imatrix data generation, kalomaze's `groups_merged.txt` with additional roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Nyanade_Stunna-Maid-7B-GGUF-IQ-Imatrix/blob/main/imatrix-with-rp-ex.txt). This was just to add a bit more diversity to the data with the intended use case in mind.
 
 </details><br>
 
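For reference, imatrix-based quantization in llama.cpp is done with its `imatrix` and `quantize` tools. The commands below are a minimal sketch only, assuming a recent llama.cpp build; the model and output file names are placeholders, and exact binary names and flags can differ between llama.cpp versions.

```
# Sketch: generate the importance matrix from the calibration data
# (here the roleplay-augmented file referenced above), then quantize with it.
# Binary names are version-dependent (e.g. ./imatrix vs. llama-imatrix).

# 1) Compute the imatrix from the FP16 GGUF and the calibration text.
./imatrix -m model-f16.gguf -f imatrix-with-rp-ex.txt -o imatrix.dat

# 2) Quantize using that imatrix (IQ4_XS shown as an example quant type).
./quantize --imatrix imatrix.dat model-f16.gguf model-IQ4_XS.gguf IQ4_XS
```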