GGUF-Imatrix quantizations for Test157t/Prima-LelantaclesV6-7b.

What does "Imatrix" mean?

It stands for Importance Matrix, a technique used to improve the quality of quantized models.

The Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance.

One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse.
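As a rough sketch of how such a file is produced (not necessarily the exact command used for this repo, and with placeholder file names), llama.cpp ships an imatrix example that reads the full-precision GGUF plus a calibration text file and writes out the importance matrix:

# Sketch only; model and calibration file names are placeholders.
./imatrix -m Prima-LelantaclesV6-7b-F16.gguf -f calibration-data.txt -o imatrix-Prima-LelantaclesV6-7b-F16.dat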

More information: [1] [2].

For the --imatrix data, imatrix-Prima-LelantaclesV6-7b-F16.dat was used.

Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)

Using llama.cpp-b2294.
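The imatrix file is then passed to llama.cpp's quantize tool when producing the quantized GGUFs. As a hedged example (placeholder file names, not the literal command used here):

# Sketch only; quantizes the F16 GGUF to IQ3_S using the importance matrix.
./quantize --imatrix imatrix-Prima-LelantaclesV6-7b-F16.dat Prima-LelantaclesV6-7b-F16.gguf Prima-LelantaclesV6-7b-IQ3_S.gguf IQ3_S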

The new IQ3_S quant-option has been shown to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in koboldcpp-1.59.1 or higher.

If you want any specific quantization to be added, feel free to ask.

All credits belong to the creator.

Original model information:


This model was merged using the DARE TIES merge method, with Test157t/Lelantacles6-Experiment26-7B as the base model.

Models Merged

The following models were included in the merge:

- Test157t/West-Pasta-Lake-7b
- Test157t/Lelantacles6-Experiment26-7B

Configuration

The following YAML configuration was used to produce this model:

merge_method: dare_ties
base_model: Test157t/Lelantacles6-Experiment26-7B
parameters:
  normalize: true
models:
  - model: Test157t/West-Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Lelantacles6-Experiment26-7B
    parameters:
      weight: 1
dtype: float16
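For anyone who wants to reproduce the merge itself, a config like the one above can be passed to mergekit's command-line entry point. This is only a sketch, assuming a standard mergekit install; the config and output paths are placeholders:

# Sketch only; requires `pip install mergekit`.
mergekit-yaml prima-lelantacles-v6.yaml ./Prima-LelantaclesV6-7b-merged --cuda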
Model size: 7.24B params | Architecture: llama