---
base_model: []
library_name: transformers
tags:
  - mergekit
  - merge
---

# L3-Jamet-8B-MK.V-Blackroot-Instruct-18.5B

For GGUFs and the full model card, please see:

[https://huggingface.co./DavidAU/L3-Jamet-8B-MK.V-Blackroot-12.2B-V1-INSTRUCT-ULTRA-F32-GGUF](https://huggingface.co./DavidAU/L3-Jamet-8B-MK.V-Blackroot-12.2B-V1-INSTRUCT-ULTRA-F32-GGUF)

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
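
To reproduce a merge like this, the YAML shown in the Configuration section below is typically saved to a file and passed to mergekit's command-line tool. A rough sketch only (the file name, output directory, and flags here are illustrative, not the original invocation, and the `G:/7B/...` paths must point at local copies of the source models):

```shell
# mergekit installed from https://github.com/arcee-ai/mergekit
mergekit-yaml config.yaml ./L3-Jamet-8B-MK.V-Blackroot-Instruct-18.5B --copy-tokenizer
```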

### Models Merged

The following models were included in the merge:

* G:/7B/Jamet-8B-L3-MK.V-Blackroot
* G:/7B/Meta-Llama-3-8B-Instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
 - sources:
   - model: G:/7B/Meta-Llama-3-8B-Instruct
     layer_range: [0, 12]
 - sources:
   - model: G:/7B/Jamet-8B-L3-MK.V-Blackroot
     layer_range: [6, 19]
     parameters:
       scale:
         - filter: o_proj
           value: 1
         - filter: down_proj
           value: 1
         - value: 1
 - sources:
   - model: G:/7B/Meta-Llama-3-8B-Instruct
     layer_range: [12, 18]
     parameters:
       scale:
         - filter: o_proj
           value: .5
         - filter: down_proj
           value: .5
         - value: 1
 - sources:
   - model: G:/7B/Meta-Llama-3-8B-Instruct
     layer_range: [18, 25]
     parameters:
       scale:
         - filter: o_proj
           value: .75
         - filter: down_proj
           value: .75
         - value: 1
 - sources:
   - model: G:/7B/Jamet-8B-L3-MK.V-Blackroot
     layer_range: [19, 32]
     parameters:
       scale:
         - filter: o_proj
           value: 1
         - filter: down_proj
           value: 1
         - value: 1
merge_method: passthrough
dtype: float32
```
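
A minimal sketch of loading and prompting the merged model with transformers. The Hub id below is assumed from the model name; substitute the actual repository or local path:

```python
# Minimal usage sketch; the model id is an assumption, not a confirmed Hub repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/L3-Jamet-8B-MK.V-Blackroot-Instruct-18.5B"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use torch.float32 to match the merge dtype exactly
    device_map="auto",
)

# Llama-3 instruct-style chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a short scene set on a night train."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```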