---
base_model:
  - TheDrummer/Rocinante-12B-v1
  - jtatman/mistral_nemo_12b_reasoning_psychology_lora
  - Epiculous/Azure_Dusk-v0.2
  - nbeerbower/mistral-nemo-bophades-12B
  - Epiculous/Crimson_Dawn-v0.2
  - nbeerbower/mistral-nemo-wissenschaft-12B
  - anthracite-org/magnum-v2-12b
  - TheDrummer/Rocinante-12B-v1.1
  - anthracite-org/magnum-v2.5-12b-kto
  - mpasila/Mistral-freeLiPPA-LoRA-12B
  - anthracite-org/magnum-v2.5-12b-kto
  - jeiku/Aura-NeMo-12B
  - nbeerbower/mistral-nemo-cc-12B
  - UsernameJustAnother/Nemo-12B-Marlin-v8
  - elinas/Chronos-Gold-12B-1.0
  - SillyTilly/mistralai_Mistral-Nemo-Base-2407
  - nbeerbower/Lyra4-Gutenberg-12B
  - nbeerbower/mistral-nemo-gutenberg-12B-v4
library_name: transformers
tags:
  - mergekit
  - merge
license: apache-2.0
---

# MN-Halide-12b-v1.0

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
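
The merged model can be loaded like any other `transformers` causal LM. A minimal inference sketch, assuming the weights are published under the repo ID `Azazelle/MN-Halide-12b-v1.0` (substitute the actual repo or a local path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/MN-Halide-12b-v1.0"  # assumed repo ID; adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # merge was done in float32; bf16 is fine for inference
    device_map="auto",
)

prompt = "Write a short scene set in a rain-soaked neon city."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```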

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with SillyTilly/mistralai_Mistral-Nemo-Base-2407 as the base model.
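
For context, Model Stock roughly works per tensor: it averages the fine-tuned checkpoints, then interpolates that average back toward the base weights, with the interpolation ratio derived from how similar the fine-tuning deltas are to one another. The sketch below is a loose illustration of that idea; the ratio formula reflects my reading of the Model Stock paper and is not mergekit's actual implementation:

```python
import numpy as np

def model_stock_tensor(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    """Illustrative per-tensor Model Stock blend (needs >= 2 fine-tuned tensors)."""
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity of the fine-tuning deltas,
    # used as a stand-in for cos(theta) in the paper.
    sims = []
    for i in range(len(deltas)):
        for j in range(i + 1, len(deltas)):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    cos_theta = float(np.mean(sims))
    n = len(finetuned)
    # Interpolation ratio (assumed form from the paper): closer-clustered
    # deltas push the result further from the base, toward the average.
    t = n * cos_theta / (1.0 + (n - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1.0 - t) * base
```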

### Models Merged

The following models were included in the merge:

* nbeerbower/Lyra4-Gutenberg-12B
* nbeerbower/mistral-nemo-gutenberg-12B-v4
* elinas/Chronos-Gold-12B-1.0
* UsernameJustAnother/Nemo-12B-Marlin-v8
* TheDrummer/Rocinante-12B-v1.1
* Epiculous/Azure_Dusk-v0.2
* Epiculous/Crimson_Dawn-v0.2
* TheDrummer/Rocinante-12B-v1 + jtatman/mistral_nemo_12b_reasoning_psychology_lora
* nbeerbower/mistral-nemo-wissenschaft-12B
* nbeerbower/mistral-nemo-bophades-12B
* anthracite-org/magnum-v2.5-12b-kto + mpasila/Mistral-freeLiPPA-LoRA-12B
* nbeerbower/mistral-nemo-cc-12B
* anthracite-org/magnum-v2-12b
* anthracite-org/magnum-v2.5-12b-kto + jeiku/Aura-NeMo-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: SillyTilly/mistralai_Mistral-Nemo-Base-2407
dtype: float32
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 40]
    model: nbeerbower/Lyra4-Gutenberg-12B
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - layer_range: [0, 40]
    model: elinas/Chronos-Gold-12B-1.0
  - layer_range: [0, 40]
    model: UsernameJustAnother/Nemo-12B-Marlin-v8
  - layer_range: [0, 40]
    model: TheDrummer/Rocinante-12B-v1.1
  - layer_range: [0, 40]
    model: Epiculous/Azure_Dusk-v0.2
  - layer_range: [0, 40]
    model: Epiculous/Crimson_Dawn-v0.2
  - layer_range: [0, 40]
    model: TheDrummer/Rocinante-12B-v1+jtatman/mistral_nemo_12b_reasoning_psychology_lora
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-wissenschaft-12B
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-bophades-12B
  - layer_range: [0, 40]
    model: anthracite-org/magnum-v2.5-12b-kto+mpasila/Mistral-freeLiPPA-LoRA-12B
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-cc-12B
  - layer_range: [0, 40]
    model: anthracite-org/magnum-v2-12b
  - layer_range: [0, 40]
    model: anthracite-org/magnum-v2.5-12b-kto+jeiku/Aura-NeMo-12B
  - layer_range: [0, 40]
    model: SillyTilly/mistralai_Mistral-Nemo-Base-2407
tokenizer_source: unsloth/Mistral-Nemo-Base-2407
```
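
To reproduce the merge, save the configuration above to a file (e.g. `config.yaml`) and run it through mergekit, e.g. `mergekit-yaml config.yaml ./MN-Halide-12b-v1.0`. The `model+lora` entries (such as `TheDrummer/Rocinante-12B-v1+jtatman/mistral_nemo_12b_reasoning_psychology_lora`) tell mergekit to apply the named LoRA adapter to that checkpoint before it enters the merge, and `dtype: float32` keeps the merge arithmetic in full precision at the cost of extra memory.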