
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method. Passthrough does not average weights; it concatenates the specified layer slices from the source models into a single, deeper model (a so-called "frankenmerge").

### Models Merged

The following models were included in the merge:

* [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat)
* [hfl/llama-3-chinese-8b-instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2)
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: shenzhi-wang/Llama3-8B-Chinese-Chat
        layer_range: [0, 28]
  - sources:
      - model: hfl/llama-3-chinese-8b-instruct-v2
        layer_range: [5, 28]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: NousResearch/Hermes-2-Pro-Llama-3-8B
        layer_range: [28, 32]
merge_method: passthrough
dtype: bfloat16
```
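
The config stacks three layer ranges back to back, and the `scale` entries zero the `o_proj` and `down_proj` outputs of the duplicated hfl slice, so at merge time those repeated layers contribute nothing to the residual stream and act as identity layers. A minimal sketch of how the slices determine the merged depth (illustrative arithmetic only, not mergekit internals):

```python
# The layer ranges from the YAML above, stacked in order by passthrough.
slices = [
    ("shenzhi-wang/Llama3-8B-Chinese-Chat", (0, 28)),    # layers 0-27
    ("hfl/llama-3-chinese-8b-instruct-v2", (5, 28)),     # layers 5-27, block outputs scaled to 0
    ("NousResearch/Hermes-2-Pro-Llama-3-8B", (28, 32)),  # layers 28-31
]

total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 55 transformer layers, vs. 32 in each Llama-3-8B source (~13B params)
```

To reproduce the merge, save the YAML above as `config.yml` and run mergekit's CLI: `mergekit-yaml config.yml ./output-model-directory`.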
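
A minimal loading sketch using standard `transformers` APIs, assuming the merged weights are available under this repository id (whether a chat template is present depends on the tokenizer copied during the merge):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mergekit-community/mergekit-passthrough-dmirwnd"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "你好，请用中文介绍一下你自己。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```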
