
Llama3-12B-wwe

This is a merge of pre-trained language models created using mergekit (https://github.com/arcee-ai/mergekit).

Merge Details

Merge Method

This model was merged using the passthrough merge method, which stacks slices of transformer layers from the source models into one deeper model without interpolating or averaging any weights. The result is a roughly 13.7B-parameter model stored in float16.
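
As a rough illustration (a plain-Python sketch, not part of the original card), the snippet below expands the slice definitions from the configuration further down into a per-layer plan. Assuming mergekit's half-open layer_range convention ([start, end)), the merged model ends up with 16 + 18 + 24 = 58 transformer layers; the ranges overlap, so several depth positions appear twice, filled by different source models.

# Sketch: expand the passthrough slice plan into per-layer provenance.
# Slice definitions are copied from the YAML configuration in this card;
# layer_range is assumed half-open ([start, end)), as in mergekit's examples.

slices = [
    ("shenzhi-wang/Llama3-8B-Chinese-Chat", 0, 16),
    ("hfl/llama-3-chinese-8b-instruct-v2", 6, 24),
    ("NousResearch/Hermes-2-Pro-Llama-3-8B", 8, 32),
]

plan = [
    (model, src_layer)
    for model, start, end in slices
    for src_layer in range(start, end)
]

print(f"merged model depth: {len(plan)} layers")  # 16 + 18 + 24 = 58
for out_layer, (model, src_layer) in enumerate(plan):
    print(f"layer {out_layer:2d} <- {model}[{src_layer}]")

Stacking 58 Llama-3-8B-sized layers plus the embeddings and output head is what yields the roughly 13.7B parameter count.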

Models Merged

The following models were included in the merge:

- shenzhi-wang/Llama3-8B-Chinese-Chat
- hfl/llama-3-chinese-8b-instruct-v2
- NousResearch/Hermes-2-Pro-Llama-3-8B

Configuration

The following YAML configuration was used to produce this model:

slices:
- sources:
  - layer_range: [0, 16]
    model: shenzhi-wang/Llama3-8B-Chinese-Chat
- sources:
  - layer_range: [6, 24]
    model: hfl/llama-3-chinese-8b-instruct-v2
- sources:
  - layer_range: [8, 32]
    model: NousResearch/Hermes-2-Pro-Llama-3-8B
merge_method: passthrough
dtype: float16
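
To reproduce the merge, the configuration above can be saved as config.yml and run through mergekit, either with the mergekit-yaml command line tool (mergekit-yaml config.yml ./Llama3-12B-wwe) or from Python. Below is a minimal sketch of the Python route, assuming a recent mergekit release (the MergeConfiguration / run_merge entry points and the exact MergeOptions fields can differ between versions) and enough disk space for the three 8B source models plus the merged output:

# Minimal reproduction sketch; assumes mergekit is installed
# (pip install mergekit) and the configuration above is saved as config.yml.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Llama3-12B-wwe",                  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy a tokenizer from a source model
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
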
Inference Examples
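
A minimal inference sketch using Hugging Face transformers, assuming the weights published in this repository (mergekit-community/Llama3-12B-wwe) and that the bundled tokenizer carries a usable Llama 3 chat template; in float16 the 13.7B parameters need roughly 28 GB of memory:

# Minimal inference sketch for the merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/Llama3-12B-wwe"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype declared in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))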
