---
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- mistral
inference: false
license: apache-2.0
---
|
|
|
This repository hosts GGUF-IQ-Imatrix quants for [vicgalle/RoleBeagle-11B](https://huggingface.co./vicgalle/RoleBeagle-11B). |
|
|
|
**What does "Imatrix" mean?** |
|
|
|
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. |
|
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. |
|
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. |
|
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) |
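As a rough conceptual sketch only (llama.cpp's actual imatrix implementation collects per-tensor activation statistics and feeds them into block-wise quantization, which is considerably more involved), importance weighting amounts to minimizing a quantization error in which each input channel is weighted by how strongly it fires on the calibration data. All names and sizes below are made up for illustration:

```python
import numpy as np

# Toy illustration only; not llama.cpp's real imatrix code.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 16))        # a hypothetical weight matrix
calib_acts = rng.normal(size=(256, 16))   # activations seen on calibration data

# Importance of each input channel: its mean squared activation.
importance = (calib_acts ** 2).mean(axis=0)

def quantize_4bit(w, scale):
    """Crude symmetric 4-bit round-to-nearest: integer levels in [-8, 7]."""
    return np.clip(np.round(w / scale), -8, 7) * scale

# Choose the scale minimizing the *importance-weighted* squared error, so
# channels that fire strongly on calibration data are reconstructed best.
scales = np.linspace(0.05, 1.0, 100)
weighted_err = [
    ((quantize_4bit(weights, s) - weights) ** 2 * importance).sum()
    for s in scales
]
print(f"best scale: {scales[int(np.argmin(weighted_err))]:.3f}")
```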
|
|
|
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co./Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The extra chats were added simply to bring a bit more diversity to the calibration data.
|
|
|
**Steps:** |
|
|
|
```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
|
*Using the latest llama.cpp at the time.* |
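Roughly, the first two steps map onto llama.cpp's conversion script and `imatrix` tool. A minimal sketch follows; the file names and paths are illustrative assumptions, and the exact scripts, binaries, and flags vary between llama.cpp versions:

```python
import subprocess

# Base -> GGUF(F16): convert the HF checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "convert.py", "RoleBeagle-11B",
     "--outtype", "f16", "--outfile", "RoleBeagle-11B-F16.gguf"],
    check=True,
)

# GGUF(F16) -> Imatrix-Data: run the calibration text through the model
# to collect the importance matrix.
subprocess.run(
    ["./imatrix", "-m", "RoleBeagle-11B-F16.gguf",
     "-f", "imatrix-with-rp-format-data.txt", "-o", "imatrix.dat"],
    check=True,
)
```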
|
|
|
```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
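Each entry in that list then becomes one `quantize` invocation against the F16 GGUF, with the imatrix passed in so the low-bit quants can use it. Again a sketch under the same naming assumptions as above (newer llama.cpp builds call the binary `llama-quantize`):

```python
import subprocess

for option in quantization_options:
    # GGUF(F16) -> GGUF(Imatrix-Quants): one output file per quant type.
    subprocess.run(
        ["./quantize", "--imatrix", "imatrix.dat",
         "RoleBeagle-11B-F16.gguf",
         f"RoleBeagle-11B-{option}-imat.gguf", option],
        check=True,
    )
```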
|
|
|
## Original model information: |
|
|
|
# RoleBeagle-11B |
|
|
|
![img](https://cdn-uploads.huggingface.co/production/uploads/63df7c44f0c75dfb876272c0/NWE9GsHfROv-1_fLTBpas.png) |
|
|
|
A DPO-finetune from [vicgalle/CarbonBeagle-11B-truthy](https://huggingface.co./vicgalle/CarbonBeagle-11B-truthy) over a subset of OpenHermesPreferences containing RP conversations.

It keeps most of the intelligence from CarbonBeagle-11B, and hopefully role-plays better.
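For context, DPO fine-tunes directly on preference pairs instead of training a separate reward model. The standard objective (the general DPO loss; the exact recipe and hyperparameters of this finetune are not stated) is

$$\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ and $y_l$ are the preferred and rejected responses, $\pi_{\mathrm{ref}}$ is the reference policy (here the CarbonBeagle-11B-truthy starting point), and $\beta$ controls how far the tuned policy may drift from it.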
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_vicgalle__RoleBeagle-11B) |
|
|
|
| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |76.06|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot)              |89.77|
|MMLU (5-Shot)                    |66.35|
|TruthfulQA (0-shot)              |77.92|
|Winogrande (5-shot)              |84.06|
|GSM8k (5-shot)                   |65.88|