|
--- |
|
library_name: transformers |
|
license: other |
|
language: |
|
- en |
|
tags: |
|
- gguf |
|
- quantized |
|
- roleplay |
|
- imatrix |
|
- mistral |
|
- merge |
|
inference: false |
|
|
|
|
|
|
|
--- |
|
|
|
> [!TIP] |
|
> **Support:** <br> |
|
> My upload speeds have been cooked and unstable lately. <br> |
|
> Realistically I'd need to move to get a better provider. <br> |
|
> If you **want to** and are able to... <br>
|
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br> |
|
> I apologize for disrupting your experience. |
|
|
|
|
|
This repository hosts GGUF-Imatrix quantizations for [ChaoticNeutrals/BuRP_7B](https://huggingface.co./ChaoticNeutrals/BuRP_7B). |
|
|
|
**What does "Imatrix" mean?** |
|
|
|
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. |
|
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. |
|
The idea is to preserve the most important information during quantization, which helps reduce the performance loss that quantization would otherwise cause, especially when the calibration data is diverse.
|
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) |
|
|
|
**Steps:** |
|
``` |
|
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
|
``` |
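
For reference, roughly the same pipeline can be reproduced with llama.cpp's own tools. The sketch below is a minimal illustration, not the exact commands used for this repository: the converter script name and the `imatrix`/`quantize` binary names vary between llama.cpp versions, and all file paths here are assumptions.

```python
# Minimal sketch of the Base -> GGUF(F16) -> Imatrix-Data -> Imatrix-Quants pipeline
# using llama.cpp's command-line tools. File names and paths are illustrative
# assumptions, not the exact ones used for this repo.
import subprocess

model_dir = "ChaoticNeutrals/BuRP_7B"            # assumed local checkout of the base model
f16_gguf = "BuRP_7B-F16.gguf"
imatrix_file = "imatrix.dat"
calibration = "imatrix-with-rp-format-data.txt"  # the calibration text linked below

# 1) Base -> GGUF(F16): convert the HF checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", model_dir, "--outtype", "f16", "--outfile", f16_gguf],
    check=True,
)

# 2) GGUF(F16) -> Imatrix-Data: compute the importance matrix from calibration text.
subprocess.run(
    ["./imatrix", "-m", f16_gguf, "-f", calibration, "-o", imatrix_file],
    check=True,
)

# 3) GGUF(F16) + Imatrix -> quantized GGUFs, one per requested quant type.
for quant in ["Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
              "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"]:
    subprocess.run(
        ["./quantize", "--imatrix", imatrix_file, f16_gguf,
         f"BuRP_7B-{quant}-imat.gguf", quant],
        check=True,
    )
```

The importance matrix is computed once and reused for every quant type.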
|
**Quants:** |
|
```python |
|
quantization_options = [ |
|
"Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K", |
|
"Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS" |
|
] |
|
``` |
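
If you want a quick way to try one of these quants from Python, a minimal sketch using `huggingface_hub` and `llama-cpp-python` is below. The repository id, file name, and prompt are assumptions for illustration; check this repo's file list for the exact names.

```python
# Illustrative sketch: download one quant and run it locally with llama-cpp-python.
# The repo_id and filename below are assumptions; use the actual ones from this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Lewdiculous/BuRP_7B-GGUF-Imatrix",  # assumed repository id
    filename="BuRP_7B-Q4_K_M-imat.gguf",         # assumed file name
)

llm = Llama(model_path=gguf_path, n_ctx=4096)

output = llm("Write a short in-character greeting.", max_tokens=128)
print(output["choices"][0]["text"])
```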
|
|
|
If you want a quant that isn't listed here, or quants for another model, feel free to request it.
|
|
|
**This is experimental.** |
|
|
|
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co./Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).
|
|
|
**Alt-image:** |
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/CS8ltMyem_KeoSSVuuAdx.jpeg) |
|
|
|
**Original model information:** |
|
|
|
# BuRP |
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/RsiscU77BoQSzDUJkLtYc.jpeg) |
|
|
|
So you want a model that can do it all? You've been dying to RP with a superintelligence who never refuses your advances while sticking to your strange and oddly specific dialogue format? |
|
|
|
Well, look no further because BuRP is the model you need. |
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml |
|
slices: |
|
- sources: |
|
- model: ErisLaylaSLERP |
|
layer_range: [0, 32] |
|
- model: ParadigmInfinitySLERP |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: ParadigmInfinitySLERP |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 |
|
dtype: bfloat16 |
|
``` |
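
For context, a configuration in this format is consumed by mergekit. The minimal sketch below shows how such a file is typically run; the config file name, output directory, and `--cuda` flag are assumptions, not a record of the upstream authors' exact commands.

```python
# Hedged sketch: running a mergekit YAML config like the one above.
# File names and the output path are illustrative assumptions.
import subprocess

subprocess.run(
    ["mergekit-yaml", "burp_config.yaml", "./BuRP_7B-merged", "--cuda"],
    check=True,
)
```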