---
base_model:
  - allura-org/MS-Meadowlark-22B
  - TheDrummer/Cydonia-22B-v1.2
  - Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
  - TheDrummer/Cydonia-22B-v1.1
  - anthracite-org/magnum-v4-22b
  - TheDrummer/Cydonia-22B-v1.3
  - spow12/ChatWaifu_v2.0_22B
  - Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
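
To reproduce a merge like this, mergekit reads the YAML shown under Configuration below. Here is a minimal sketch driving it from Python, assuming mergekit's documented Python API; the `mergekit-yaml config.yaml ./merged-model` CLI is the more common route, and `config.yaml` and the output path are hypothetical names:

```python
# Sketch only: run mergekit from Python, assuming its documented API.
# `config.yaml` (hypothetical filename) holds the YAML from the
# Configuration section below.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base tokenizer into the output
    ),
)
```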

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) as the base.
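
As a rough intuition for the method: for each source model, DARE takes the delta from the base (a "task vector"), randomly drops a fraction of its entries (the `density` values below set the keep probability), and rescales the survivors so the delta's expected value is preserved; TIES then resolves sign conflicts between models before the weighted deltas are added back to the base. A minimal, illustrative sketch of the DARE step (not mergekit's actual implementation):

```python
import torch

def dare_sparsify(base: torch.Tensor, finetuned: torch.Tensor,
                  density: float) -> torch.Tensor:
    """Illustrative DARE step: drop random delta entries, rescale the rest."""
    delta = finetuned - base                    # task vector vs. the base model
    keep = torch.rand_like(delta) < density     # keep each entry w.p. `density`
    # Rescale kept entries by 1/density so the sparse delta is unbiased
    return torch.where(keep, delta / density, torch.zeros_like(delta))
```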

### Models Merged

The following models were included in the merge:

* [allura-org/MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B)
* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small)
* [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)
* [TheDrummer/Cydonia-22B-v1.3](https://huggingface.co/TheDrummer/Cydonia-22B-v1.3)
* [spow12/ChatWaifu_v2.0_22B](https://huggingface.co/spow12/ChatWaifu_v2.0_22B)
* [Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0         # Magnum as the primary writing style model
      density: 0.85       # Slightly lower density to allow for blending with other models
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.3         # Reduced weight for creativity to avoid too much overlap
      density: 0.7        # Reduced density to ensure balance in creativity influence
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.2         # Extra creativity without overwhelming the narrative
      density: 0.65       # Reduced density for further balance
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.25        # Lower weight for accuracy and specific features (evil/trolling)
      density: 0.7        # Moderate density for accuracy retention without interference
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.35        # Increased weight for storytelling and RP focus
      density: 0.8        # Moderate density to ensure a good storytelling influence
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.3         # Increased weight for creativity and balanced writing
      density: 0.7        # Moderate density to ensure creativity enhances the text
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.3         # Balanced weight for anime-style RP and conversational tone
      density: 0.7        # Moderate density to contribute anime style without overshadowing
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.3         # Balanced weight for Japanese context handling
      density: 0.75       # Higher density to retain Japanese context without excess influence

merge_method: dare_ties  # Using dare_ties for smoother blending across models
base_model: anthracite-org/magnum-v4-22b
parameters:
  density: 0.85          # General density to ensure an overall balanced mix
  epsilon: 0.1           # Maximum change in drop probabilities (affects model output blending)
  lambda: 1.2            # Scaling factor for final merged deltas
dtype: bfloat16
```
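
Since the card sets `library_name: transformers`, the merged model loads like any other causal LM. A minimal usage sketch follows; the repo id is a placeholder, as the card does not state where the merged weights are hosted:

```python
# Usage sketch; swap in the actual Hugging Face repo id of this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-merged-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the config
    device_map="auto",
)

prompt = "Write a short scene set in a rainy harbor town."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```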