---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Warit2/GemOmniscien
- google/gemma-2b-it
---

# GemOmniscien-ties

GemOmniscien-ties is a TIES merge of the following models, produced with [mergekit](https://github.com/cg123/mergekit):
* [Warit2/GemOmniscien](https://huggingface.co./Warit2/GemOmniscien)
* [google/gemma-2b-it](https://huggingface.co./google/gemma-2b-it)

## 🧩 Configuration

```yaml
models:
  - model: Warit2/GemOmniscien
    parameters:
      density: 0.5
      weight: 0.5
  - model: google/gemma-2b-it
    parameters:
      density: 0.5
      weight: 0.5 # weight gradient
merge_method: ties
base_model: Warit2/GemOmniscien
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16


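# --- Alternative merge configurations, kept commented out for reference ---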
# models:
#   - model: unsloth/gemma-7b-bnb-4bit
#     layer_range: [0, 32]
#     # no parameters necessary for base model
#   - model: mistralai/Mistral-7B-v0.1
#     layer_range: [24, 32]
# merge_method: passthrough
# # base_model: unsloth/gemma-7b-bnb-4bit
# parameters:
#   normalize: true
#   int8_mask: true
# dtype: float16
# slices:
#   - sources:
#     - model: unsloth/gemma-2b-bnb-4bit
#       layer_range: [0, 16]
#   - sources:
#     - model: NousResearch/Nous-Hermes-llama-2-7b
#       layer_range: [0, 22]
# merge_method: passthrough
# dtype: bfloat16
# models:
#   - model: unsloth/gemma-2b-bnb-4bit
#     parameters:
#       density: 0.53
#       weight: 0.45
#   - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
#     parameters:
#       weight: 0.5
# merge_method: ties
# base_model: unsloth/gemma-2b-bnb-4bit
# parameters:
#   int8_mask: true
# dtype: bfloat16
```
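
## 💻 Usage

A minimal loading sketch with 🤗 Transformers. The repository id `Warit2/GemOmniscien-ties` below is an assumption based on the model name; point it at wherever the merged weights are actually hosted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Hypothetical repo id for the merged model -- adjust as needed.
model_id = "Warit2/GemOmniscien-ties"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",           # requires the accelerate package
)

prompt = "Explain the TIES merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If `accelerate` is not installed, drop `device_map="auto"` and move the model to a device manually.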