---
license: cc-by-nc-4.0
---

## Description

This repo contains the bf16 files of Nyxene-v3-11B, a new version of Nyxene-v1-11B with a few changes to the merge.

## Models used

- rwitz/go-bruins-v2
- chargoddard/loyal-piano-m7-cdpo
- AIDC-ai-business/Marcoroni-7B-v3
- Intel/neural-chat-7b-v3-3-Slerp

## Prompt template

Just use ChatML, as shown below.
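
For reference, a minimal ChatML layout looks like this (the placeholder text in braces is illustrative, not part of the template):

```
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```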

## The secret sauce

go-bruins-loyal-piano-11B (a passthrough stack of the first 24 layers of one 7B model on the last 24 layers of another, giving 48 layers, roughly 11B parameters):

```yaml
slices:
  - sources:
    - model: rwitz/go-bruins-v2
      layer_range: [0, 24]
  - sources:
    - model: chargoddard/loyal-piano-m7-cdpo
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

neural-marcoroni-11B:

```yaml
slices:
  - sources:
    - model: AIDC-ai-business/Marcoroni-7B-v3
      layer_range: [0, 24]
  - sources:
    - model: Intel/neural-chat-7b-v3-3-Slerp
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Nyxene-11B (the final model: a SLERP merge of the two 11B stacks above, with per-tensor interpolation weights):

```yaml
slices:
  - sources:
      - model: "./go-bruins-loyal-piano-11B"
        layer_range: [0, 48]
      - model: "./neural-marcoroni-11B"
        layer_range: [0, 48]
merge_method: slerp
base_model: "./go-bruins-loyal-piano-11B"
parameters:
  t:
    - filter: lm_head
      value: [0.5]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.5]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```

I used mergekit for all of the merges described here.

Thanks to Undi95 for the original 11B Mistral merge recipe.
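
If you want to try the model, below is a minimal inference sketch using 🤗 Transformers. The repo id, sampling settings, and example prompt are assumptions for illustration only; the ChatML prompt is built by hand rather than relying on a bundled chat template.

```python
# Minimal usage sketch (assumptions: the repo id below, and that the bf16 weights
# load through AutoModelForCausalLM as with other Mistral-based merges).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "beberik/Nyxene-v3-11B"  # assumed repo id, adjust to the actual one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the card ships bf16 weights
    device_map="auto",           # requires `accelerate`
)

# Build the ChatML prompt by hand, matching the template shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Give me three ideas for a weekend hike.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```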