This is a repository of GGUF Quants for DareBeagel-2x7B

Original Model Available Here: https://huggingface.co./shadowml/DareBeagel-2x7B

Available Quants

  • Q8_0
  • Q6_K
  • Q5_K_M
  • Q5_K_S
  • Q4_K_M
  • Q4_K_S
  • Q3_K_M
  • Q3_K_S
  • Q2_K
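
As a rough rule of thumb, a quant's file size is the parameter count times its bits per weight divided by 8. The sketch below estimates sizes for this 12.9B-parameter model; the bits-per-weight figures are approximate averages for llama.cpp k-quants, not exact values for the files in this repo.

```python
# Rough GGUF file-size estimate: parameters x bits-per-weight / 8.
# The bpw values are approximations, not exact figures for these files.
PARAMS = 12.9e9  # parameter count reported for this model

approx_bpw = {
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q5_K_M": 5.69,
    "Q4_K_M": 4.85,
    "Q3_K_M": 3.91,
    "Q2_K": 2.63,
}

def approx_size_gb(quant: str) -> float:
    """Approximate file size in gigabytes for a given quant type."""
    return PARAMS * approx_bpw[quant] / 8 / 1e9

for q in approx_bpw:
    print(f"{q}: ~{approx_size_gb(q):.1f} GB")
```

Useful for checking which quants fit in your available RAM/VRAM before downloading.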

DareBeagel-2x7B

DareBeagel-2x7B is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

  • mlabonne/NeuralBeagle14-7B
  • mlabonne/NeuralDaredevil-7B

🧩 Configuration

```yaml
base_model: mlabonne/NeuralBeagle14-7B
gate_mode: random
experts:
  - source_model: mlabonne/NeuralBeagle14-7B
    positive_prompts: [""]
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts: [""]
```
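
At inference time, an MoE layer scores each token's hidden state with a gate and mixes the expert outputs by those scores; `gate_mode: random` means the gate weights start out random rather than derived from prompts. The toy sketch below illustrates the routing idea only; the hidden size, gate, and stand-in expert functions are invented for illustration and bear no relation to the real 7B FFN experts.

```python
import math
import random

random.seed(0)
HIDDEN = 4  # tiny hidden size, purely for illustration

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# "gate_mode: random": the router is a randomly initialized linear map
# from the hidden state to one score per expert.
gate = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(2)]

# Stand-in "experts" (the real ones are the two merged models' FFN blocks).
experts = [lambda h: [2 * x for x in h], lambda h: [x + 1 for x in h]]

def moe_forward(h):
    # Gate scores -> probabilities over the two experts.
    scores = softmax([sum(w * x for w, x in zip(row, h)) for row in gate])
    outs = [expert(h) for expert in experts]
    # Output is the gate-weighted mix of expert outputs.
    return [sum(s * o[i] for s, o in zip(scores, outs)) for i in range(HIDDEN)]

print(moe_forward([0.1, 0.2, 0.3, 0.4]))
```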

πŸ’» Usage

Load in Kobold.cpp or any other GGUF-compatible backend.
I found Alpaca (and Alpaca-like) prompts worked well.
Settings that worked well for me:

  • Min P: 0.1
  • Dynamic Temperature: Min 0, Max 3
  • Repetition Penalty: 1.03
  • Repetition Penalty Range: 1000
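
Min P keeps only the tokens whose probability is at least `min_p` times the top token's probability, so the cutoff tightens when the model is confident and loosens when it is not. A toy sketch of that filter (illustrative only, not Kobold.cpp's actual implementation):

```python
import math

def min_p_filter(logits, min_p=0.1):
    """Return indices of tokens with probability >= min_p * top probability."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

# With one dominant token, the low-probability tail is filtered out.
print(min_p_filter([5.0, 4.5, 1.0, 0.0], min_p=0.1))  # → [0, 1]
```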
Format: GGUF
Model size: 12.9B params
Architecture: llama
