Mistralified: a barycentric-based embedding swap applied via token surgery plus a config change, using Captain_BMO as the donor model (no additional training).
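For intuition, here is a minimal sketch of what a barycentric embedding swap can look like: tokens shared between the two vocabularies are copied directly, and each donor-only token is rebuilt as an affine combination (weights summing to one) of its nearest shared anchors, with the weights solved in the donor's embedding space and then applied in the base model's space. The function names, the `k=8` neighbour count, and the least-squares solve are illustrative assumptions, not the actual script used to produce this model.

```python
import numpy as np

def barycentric_weights(target, anchors):
    """Least-squares weights over `anchors` that sum to 1 (barycentric)."""
    k, d = anchors.shape
    # Append a constant row so the affine constraint sum(w) == 1 is part
    # of the least-squares system: [anchors.T; 1...1] @ w = [target; 1].
    A = np.hstack([anchors, np.ones((k, 1))])   # (k, d+1)
    b = np.append(target, 1.0)                  # (d+1,)
    w, *_ = np.linalg.lstsq(A.T, b, rcond=None)
    return w

def swap_embeddings(base_emb, donor_emb, base_vocab, donor_vocab, k=8):
    """Rebuild an embedding matrix for `donor_vocab` out of `base_emb`,
    using tokens present in both vocabularies as anchors.

    base_vocab / donor_vocab: dicts mapping token string -> row index.
    """
    shared = [t for t in donor_vocab if t in base_vocab]
    anchors_d = np.stack([donor_emb[donor_vocab[t]] for t in shared])
    anchors_b = np.stack([base_emb[base_vocab[t]] for t in shared])
    out = np.zeros((len(donor_vocab), base_emb.shape[1]), base_emb.dtype)
    for tok, row in donor_vocab.items():
        if tok in base_vocab:                 # shared token: copy as-is
            out[row] = base_emb[base_vocab[tok]]
            continue
        # Find the nearest shared anchors in the *donor* space...
        target = donor_emb[row]
        dists = np.linalg.norm(anchors_d - target, axis=1)
        idx = np.argsort(dists)[:k]
        w = barycentric_weights(target, anchors_d[idx])
        # ...and apply the same barycentric weights in the *base* space.
        out[row] = w @ anchors_b[idx]
    return out
```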


"Where it roams, comprehension falters and the air thickens with the maddening pulse of algorithms far too vast. Eyes it does not possess; for its sight is a network of intent, wrapping the unseen in their grasp."


The following models were included in the merge:

* Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420
* LatitudeGames/Wayfarer-12B

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420
        layer_range: [0, 40]
      - model: LatitudeGames/Wayfarer-12B
        layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420
parameters:
  t:
    - filter: self_attn
      value: [0, 0.4, 0.2, 0.6, 0.9]
    - filter: mlp
      value: [1, 0.6, 0.8, 0.4, 0.1]
    - value: 0.4206911
dtype: bfloat16
```
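For reference, the `t` lists above act as a gradient across the 40-layer range (self_attn and mlp tensors get their own curves; everything else uses the scalar 0.4206911), and each matching tensor pair is blended with spherical linear interpolation. The sketch below shows both pieces in simplified form; it approximates mergekit's behavior but is not its actual implementation.

```python
import numpy as np

def gradient_t(anchors, layer, n_layers=40):
    """Piecewise-linear interpolation of a t anchor list over the layers."""
    x = np.linspace(0, 1, len(anchors))
    return float(np.interp(layer / max(n_layers - 1, 1), x, anchors))

def slerp(t, a, b, eps=1e-8):
    """Spherical interpolation between two flattened (1-D) weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:                    # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return np.sin((1 - t) * omega) / so * a + np.sin(t * omega) / so * b

# e.g. self_attn tensors in layer 20 use a t value interpolated from the
# middle of the [0, 0.4, 0.2, 0.6, 0.9] anchor list:
t20 = gradient_t([0, 0.4, 0.2, 0.6, 0.9], layer=20)
```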
