Knobi3 committed
Commit 51b760c · verified · 1 Parent(s): 05c9f6b

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +29 -21
README.md CHANGED
@@ -3,38 +3,46 @@ tags:
 - merge
 - mergekit
 - lazymergekit
-- OpenPipe/mistral-ft-optimized-1218
-- mlabonne/NeuralHermes-2.5-Mistral-7B
+- AI-Sweden-Models/tyr
+- mlabonne/NeuralBeagle14-7B
+- neph1/bellman-7b-mistral-instruct-v0.2
 base_model:
-- OpenPipe/mistral-ft-optimized-1218
-- mlabonne/NeuralHermes-2.5-Mistral-7B
+- AI-Sweden-Models/tyr
+- mlabonne/NeuralBeagle14-7B
+- neph1/bellman-7b-mistral-instruct-v0.2
 ---
 
 # NeuralPipe-7B-slerp
 
 NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
-* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
+* [AI-Sweden-Models/tyr](https://huggingface.co/AI-Sweden-Models/tyr)
+* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
+* [neph1/bellman-7b-mistral-instruct-v0.2](https://huggingface.co/neph1/bellman-7b-mistral-instruct-v0.2)
 
 ## 🧩 Configuration
 
 ```yaml
-slices:
-  - sources:
-      - model: OpenPipe/mistral-ft-optimized-1218
-        layer_range: [0, 32]
-      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
-        layer_range: [0, 32]
-merge_method: slerp
-base_model: OpenPipe/mistral-ft-optimized-1218
+models:
+  - model: Nexusflow/Starling-LM-7B-beta
+    # No parameters necessary for base model
+  - model: AI-Sweden-Models/tyr
+    parameters:
+      density: 0.53
+      weight: 0.4
+  - model: mlabonne/NeuralBeagle14-7B
+    parameters:
+      density: 0.53
+      weight: 0.3
+  - model: neph1/bellman-7b-mistral-instruct-v0.2
+    parameters:
+      density: 0.53
+      weight: 0.3
+merge_method: dare_ties
+base_model: Nexusflow/Starling-LM-7B-beta
 parameters:
-  t:
-    - filter: self_attn
-      value: [0, 0.5, 0.3, 0.7, 1]
-    - filter: mlp
-      value: [1, 0.5, 0.7, 0.3, 0]
-    - value: 0.5
+  int8_mask: true
 dtype: bfloat16
+
 ```
 
 ## 💻 Usage
@@ -46,7 +54,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "Knobi3/NeuralPipe-7B-slerp"
+model = "knobi3/NeuralPipe-7B-slerp"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
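
For reference, the new `dare_ties` configuration above can be turned into actual weights with mergekit. Below is a minimal sketch, assuming the Python entry points used by the linked LazyMergekit notebook; `config.yaml` and the output directory are placeholder names, and the CLI equivalent would be `mergekit-yaml config.yaml ./merge`:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the dare_ties config shown in the diff above (placeholder file name).
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge; model weights are fetched from the Hub on first use.
run_merge(
    merge_config,
    out_path="./merge",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # also copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```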
 
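
The usage hunk shows only the renamed checkpoint; the rest of the snippet sits outside the diff context. As a sketch of how the standard LazyMergekit usage template typically continues (not the file's verbatim contents):

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "knobi3/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Render the chat messages with the model's chat template.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion from the merged model.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```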