BigWeave v25 95b

The BigWeave models are an experimental series for identifying merge settings that increase model performance. The version number simply tracks successive attempts and is not a quality indicator. Only merges that demonstrate good performance are retained and shared.

Prompting Format

ChatML, Mistral, and Vicuna formats all work; each is sketched below.
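
For reference, here is a minimal Python sketch of the three templates. These are the standard forms of the ChatML, Mistral, and Vicuna prompt formats; exact whitespace and system-prompt handling are assumptions, so verify against your inference stack.

```python
# Standard forms of the three prompt formats; treat as illustrative templates,
# not as this model's canonical chat template.

def chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def mistral(user: str) -> str:
    return f"[INST] {user} [/INST]"

def vicuna(system: str, user: str) -> str:
    return f"{system}\n\nUSER: {user}\nASSISTANT:"
```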

Merge Process

This is a self-merge of 152334H/miqu-1-70b-sf. Layers 1-31 are duplicated in overlapping groups of 10, growing the model from 80 to 110 layers (about 94.6B parameters). According to exl2 measurements, these early layers are among the least important.

Merge configuration:

```yaml
slices:
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [0,6]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [1,11]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [6,16]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [11,21]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [16,26]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [21,31]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [26,80]
merge_method: passthrough
dtype: float16
```
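
The passthrough method concatenates the listed layer slices without blending any weights, so the duplicated layers are exact copies. Assuming the merge has been built (mergekit's `mergekit-yaml` CLI consumes configs like the one above) or downloaded from the Hub, a minimal loading sketch with transformers follows; the repo id is taken from this card, and ~95B float16 parameters require substantial GPU memory.

```python
# Minimal sketch: load the merged model from the Hub and run one prompt.
# Assumes enough GPU memory for ~95B params in float16 (or add quantization).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmixer/BigWeave-v25-95b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # shard across available GPUs
)

prompt = "[INST] Explain what a passthrough merge is. [/INST]"  # Mistral format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```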