shahzebnaveed committed
Commit 5377150
1 Parent(s): 38757cf

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +23 -10
README.md CHANGED
@@ -1,22 +1,34 @@
  ---
- license: apache-2.0
+ base_model:
+ - shahzebnaveed/NeuralHermes-2.5-Mistral-7B
+ - berkeley-nest/Starling-LM-7B-alpha
+ library_name: transformers
  tags:
- - merge
  - mergekit
- - lazymergekit
- - berkeley-nest/Starling-LM-7B-alpha
- - shahzebnaveed/NeuralHermes-2.5-Mistral-7B
+ - merge
+
  ---
+ # merge

- # StarlingHermes-2.5-Mistral-7B-slerp
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- StarlingHermes-2.5-Mistral-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- * [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
  * [shahzebnaveed/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/shahzebnaveed/NeuralHermes-2.5-Mistral-7B)
+ * [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)

- ## 🧩 Configuration
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:

  ```yaml
+
  slices:
  - sources:
  - model: berkeley-nest/Starling-LM-7B-alpha
@@ -34,4 +46,5 @@ parameters:
  - value: 0.5
  dtype: bfloat16

- ```
+
+ ```
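
Not part of this commit, but for context: the merge config above is the kind of YAML typically passed to mergekit's `mergekit-yaml` command, and the resulting model is loadable with `transformers` (per the new README's `library_name`). Below is a minimal loading sketch; the repo id is an assumption inferred from the old README's title (the final Hub id is not stated in this diff), and `bfloat16` mirrors the config's `dtype`.

```python
# Minimal sketch (not from the commit): load the merged model with transformers.
# The repo id is assumed from the old README title; replace it with the actual Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shahzebnaveed/StarlingHermes-2.5-Mistral-7B-slerp"  # assumed, not stated in this diff

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="bfloat16",  # matches the merge config's dtype
    device_map="auto",       # requires the accelerate package
)

prompt = "Explain SLERP model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```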