## What is this?

Mimicore-GreenSnake-22B is another 22B merge. Decent, I think? I only tested at Q4_K_S, so I can't speak to its full-precision performance. The choice is yours! Enjoy!

Template: Mistral, specifically Mistral V3; don't use V3-Tekken. If the model starts speaking for you, switch to ChatML.
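For reference, here is a rough sketch of the two prompt formats mentioned above. The exact special tokens depend on this model's tokenizer config, so treat the strings below as assumptions rather than the definitive template:

```python
# Hypothetical illustration of the Mistral V3 and ChatML prompt layouts.
# Verify against the model's tokenizer_config.json before relying on them.

def mistral_v3_prompt(system: str, user: str) -> str:
    # Mistral V3 wraps the user turn in [INST] ... [/INST]
    return f"<s>[INST] {system}\n\n{user}[/INST]"

def chatml_prompt(system: str, user: str) -> str:
    # ChatML marks each role with <|im_start|> / <|im_end|>
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(mistral_v3_prompt("You are a helpful assistant.", "Hello!"))
print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```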

## Merge Details

### Models Merged

The following models were included in the merge:

* [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B)
* [Steelskull/MSM-MS-Cydrion-22B](https://huggingface.co/Steelskull/MSM-MS-Cydrion-22B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
  - model: Steelskull/MSM-MS-Cydrion-22B
merge_method: slerp
base_model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
```
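To give an intuition for what the `t` curve does: slerp interpolates each tensor along the arc between the two models' weights, and the list of `t` values is spread across the layer stack. A minimal, simplified sketch of spherical linear interpolation (not mergekit's actual implementation, which operates on full tensors):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.
    Simplified sketch of the per-tensor blend a slerp merge performs."""
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # angle between the two vectors, clamped for numerical safety
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)))
    theta = math.acos(dot)
    if theta < eps:
        # nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# The t schedule [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1] keeps the first and
# last layers close to the base model (t near 0.1) while letting the middle
# layers lean toward MSM-MS-Cydrion-22B (t up to 0.6).
print(slerp(0.1, [1.0, 0.0], [0.0, 1.0]))
print(slerp(0.6, [1.0, 0.0], [0.0, 1.0]))
```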

