---
base_model:
- Nitral-Archive/nightwing3-r64-2-latest_test-train-10B
- Nitral-Archive/nightwing3-r64-1-latest_test-train-10B
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
# Noticed some weird behavior in the 4bpw exl2 quant; not sure whether it is contained to the quant or a model-related issue. However, after seeing some recent bugfixes regarding the targeting of LM training heads, among a few other things, I will be attempting to retrain this for comparison's sake.
# Base model: (Falcon3-10B)

# Prompt format: ChatML
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
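A minimal sketch of using this template through `transformers` (assuming the tokenizer ships a ChatML chat template; the repo ID below is a placeholder, substitute the actual model name):
```python
# Sketch only: load the merged model and render a ChatML prompt.
# "Nitral-Archive/nightwing3-merged-10B" is a placeholder ID, not the real repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nitral-Archive/nightwing3-merged-10B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello there."},
]
# apply_chat_template renders the <|im_start|>/<|im_end|> format shown above
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```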
### The following YAML configuration was used to produce this model: (SLERP merge method)
```yaml
slices:
  - sources:
      - model: Nitral-Archive/nightwing3-r64-1-latest_test-train-10B
        layer_range: [0, 40]
      - model: Nitral-Archive/nightwing3-r64-2-latest_test-train-10B
        layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-Archive/nightwing3-r64-1-latest_test-train-10B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.420
dtype: bfloat16
```
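For reference, a config like this is normally run with mergekit's CLI (`mergekit-yaml config.yaml ./output-model`). The sketch below is not mergekit's code; it only illustrates, under my reading of the config, how each `t` anchor list is stretched across the 40 layers and how SLERP blends the two checkpoints (roughly, t = 0 keeps the base model's weights, t = 1 the other model's; the bare `value: 0.420` is the default for tensors not matched by a filter).
```python
# Illustrative sketch of SLERP with a per-layer t schedule; not mergekit's actual implementation.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:                       # near-parallel vectors: plain lerp
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# The five anchor values (here, the self_attn filter) are interpolated across
# the 40 merged layers, giving each layer its own blend factor.
anchors = [0, 0.5, 0.3, 0.7, 1]
layer_t = np.interp(np.linspace(0, 1, 40), np.linspace(0, 1, len(anchors)), anchors)
print(layer_t.round(2))
```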