# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base model.
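Roughly, DARE TIES first sparsifies each fine-tune's task vector (its delta from the base weights) by random drop-and-rescale, then resolves sign conflicts between models before averaging. A minimal NumPy sketch of the idea on toy vectors follows; the function names and arrays here are illustrative, not mergekit internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def dare(delta, density, rng):
    # DARE: randomly drop (1 - density) of the task-vector entries,
    # then rescale the survivors by 1/density to preserve the expectation
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

# Toy task vectors (fine-tuned weights minus base weights) for two models
base = np.zeros(6)
d1 = np.array([ 0.2, -0.1, 0.0, 0.3, -0.2, 0.1])
d2 = np.array([-0.2,  0.1, 0.4, 0.3,  0.2, -0.1])

sparse = np.stack([dare(d, density=0.5, rng=rng) for d in (d1, d2)])

# TIES sign election: per parameter, keep only deltas that agree with the
# aggregate sign, then average the surviving deltas into the base
elected = np.sign(sparse.sum(axis=0))
agree = np.sign(sparse) == elected
summed = np.where(agree, sparse, 0.0).sum(axis=0)
counts = np.maximum(agree.sum(axis=0), 1)
merged = base + summed / counts
```

The per-model `density` values in the configuration below control the drop rate in the DARE step, and the `weight` values scale each task vector's contribution.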
### Models Merged
The following models were included in the merge:
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- anthracite-org/magnum-v4-22b
- unsloth/Mistral-Small-Instruct-2409 + rAIfle/Acolyte-LORA
- InferenceIllusionist/SorcererLM-22B
- TheDrummer/Cydonia-22B-v1.1
- TheDrummer/Cydonia-22B-v1.2
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- unsloth/Mistral-Small-Instruct-2409 + Kaoeiri/Moingooistrial-22B-V1-Lora
- byroneverson/Mistral-Small-Instruct-2409-abliterated
- spow12/ChatWaifu_v2.0_22B
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- TheDrummer/Cydonia-22B-v1.3
- allura-org/MS-Meadowlark-22B
- crestf411/MS-sunfall-v0.7.0
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0   # Primary model for human-like writing
      density: 0.88 # Solid foundation for clear, balanced text generation
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.26  # Slightly reduced weight for creativity
      density: 0.7  # Matches revised influence for subtle creativity
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.16  # Reduced weight to dial back creativity overlap
      density: 0.68 # Harmonized with the roleplay model reductions
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.18  # Further reduced to minimize intrusive elements
      density: 0.68 # Balanced density for roleplay accuracy
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.28  # Reduced for less dominance of storytelling tropes
      density: 0.77 # Adjusted density for smoother integration
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.3   # Retained for its balanced creativity
      density: 0.72 # Supports descriptive fluency and accuracy
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27  # Intact to retain anime-style RP nuance
      density: 0.7  # Unmodified for balance with other models
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.2   # Slight reduction to balance Japanese context influence
      density: 0.58 # Fine-tuned to support overall coherence
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.25  # Reduced weight for a subtler dramatic tone
      density: 0.74 # Balanced density for smoother blending
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.24  # Slight reduction for subtler varied content inputs
      density: 0.7  # Aligned density for balanced integration
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.23  # Reduced for a more cohesive stylistic approach
      density: 0.74 # Matches weight reduction for smoother outputs
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.26  # Slightly dialed back for monster and mythical content
      density: 0.72 # Balanced for seamless integration
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.12  # Light touch to prevent overheating
      density: 0.65 # Low density to avoid conflict with roleplay-heavy models
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated
    parameters:
      weight: 0.15  # Introduces unfiltered context without dominating the mix
      density: 0.7  # Moderate density to retain raw contextual coherence

merge_method: dare_ties # Optimal for diverse and complex model blending
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85 # Overall density ensures logical and creative balance
  epsilon: 0.09 # Small step size for smooth blending
  lambda: 1.22  # Slightly adjusted scaling for refined sharpness
dtype: bfloat16
```
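Assuming the configuration above is saved as `config.yml`, a merge like this can typically be reproduced with mergekit's CLI. This is a sketch, not the exact command used for this model; the output path is illustrative and available flags may vary by mergekit version:

```shell
# Install mergekit, then run the merge (paths are examples; adjust to your setup)
pip install mergekit
mergekit-yaml config.yml ./merged-model --cuda --lazy-unpickle
```

The merged weights land in `./merged-model` and can then be loaded like any other Hugging Face checkpoint.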