|
--- |
|
library_name: transformers |
|
language: |
|
- en |
|
tags: |
|
- gguf |
|
- quantized |
|
- roleplay |
|
- imatrix |
|
- mistral |
|
- merge |
|
inference: false |
|
base_model: |
|
- Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context |
|
- Epiculous/Mika-7B |
|
--- |
|
|
|
This repository hosts GGUF-Imatrix quantizations for [Test157t/Mika-Longtext-7b](https://huggingface.co./Test157t/Mika-Longtext-7b). |
|
|
|
It could work better at longer context sizes, but that hasn't been verified.
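
For reference, here is a minimal usage sketch with `llama-cpp-python` and `huggingface_hub`; the repository id and file name are placeholders for whichever quant you actually download, and `n_ctx` can be raised as far as your hardware allows.

```python
# Minimal usage sketch (assumptions: llama-cpp-python backend; the repo id and
# file name below are placeholders for whichever quant you download).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Lewdiculous/Mika-Longtext-7b-GGUF-Imatrix",  # placeholder repo id
    filename="Mika-Longtext-7b-Q4_K_M-imatrix.gguf",      # placeholder file name
)

# Load with a generous context window, since the merge targets long-context use.
llm = Llama(model_path=model_path, n_ctx=16384)

out = llm("Write a short in-character greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```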
|
|
|
**What does "Imatrix" mean?** |
|
|
|
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. |
|
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. |
|
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. |
|
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) |
|
|
|
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co./Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).
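
As a rough sketch, the imatrix file can be produced with llama.cpp's `imatrix` tool against that calibration text; the file names below are illustrative and the exact flags can differ between llama.cpp versions.

```python
# Sketch of the imatrix computation step, assuming a local llama.cpp build;
# the GGUF and output names are illustrative.
import subprocess

subprocess.run([
    "./imatrix",
    "-m", "Mika-Longtext-7b-F16.gguf",        # full-precision (F16) GGUF of the model
    "-f", "imatrix-with-rp-format-data.txt",  # calibration data linked above
    "-o", "Mika-Longtext-7b-F16.imatrix",     # resulting importance matrix
], check=True)
```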
|
|
|
**Steps:** |
|
``` |
|
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
|
``` |
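
The first arrow of that chain (Base ⇢ GGUF(F16)) corresponds to running llama.cpp's HF-to-GGUF converter; a hedged sketch follows, with an illustrative local model directory and a script name that varies between llama.cpp versions.

```python
# Sketch of the Base -> GGUF(F16) conversion, assuming llama.cpp's converter
# script; newer builds use convert-hf-to-gguf.py instead of convert.py.
import subprocess

subprocess.run([
    "python", "convert.py",
    "./Mika-Longtext-7b",                      # local copy of the base model (illustrative path)
    "--outfile", "Mika-Longtext-7b-F16.gguf",
    "--outtype", "f16",
], check=True)
```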
|
**Quants:** |
|
```python |
|
quantization_options = [ |
|
"Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K", |
|
"Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS" |
|
] |
|
``` |
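
Tying the steps together, each listed type can then be produced with llama.cpp's `quantize` tool and the importance matrix from the step above; a minimal sketch with illustrative file names (flags may differ across llama.cpp versions):

```python
# Sketch of the GGUF(Imatrix-Quants) step, assuming a local llama.cpp build.
import subprocess

quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

for quant in quantization_options:
    subprocess.run([
        "./quantize",
        "--imatrix", "Mika-Longtext-7b-F16.imatrix",  # importance matrix from the imatrix step
        "Mika-Longtext-7b-F16.gguf",                  # F16 GGUF input
        f"Mika-Longtext-7b-{quant}-imatrix.gguf",     # quantized output (illustrative naming)
        quant,
    ], check=True)
```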
|
|
|
If you want a quantization that isn't listed here, or quants for another model, feel free to request it.
|
|
|
**Original model information:** |
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/dEotiYfMftpO71nbseG-q.jpeg) |
|
|
|
This model was merged using the SLERP merge method. |
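
SLERP (spherical linear interpolation) blends corresponding weight tensors along the arc between them rather than along a straight line, with the interpolation fraction `t` set per layer group in the configuration below. A simplified numpy sketch of the idea (not mergekit's exact implementation):

```python
# Simplified illustration of SLERP on two flattened weight tensors; mergekit's
# real implementation handles edge cases and per-layer parameters.
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between vectors a and b at fraction t."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)              # angle between the two weight vectors
    if theta < eps:                     # nearly parallel: plain lerp is fine
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```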
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co./Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) |
|
* [Epiculous/Mika-7B](https://huggingface.co./Epiculous/Mika-7B) |
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml |
|
slices: |
|
- sources: |
|
- model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context |
|
layer_range: [0, 32] |
|
- model: Epiculous/Mika-7B |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 |
|
dtype: bfloat16 |
|
``` |
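
Saving that YAML as `config.yaml`, the merge can presumably be reproduced with mergekit's command-line entry point; a minimal sketch (the output directory name is illustrative):

```python
# Sketch of reproducing the merge with mergekit's CLI; options vary by version.
import subprocess

subprocess.run([
    "mergekit-yaml", "config.yaml", "./Mika-Longtext-7b-merge",
], check=True)
```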
|
|