Casual-Autopsy committed b32fd1a (parent: e073c43): Update README.md
README.md
CHANGED
````diff
@@ -1,49 +1,51 @@
 ---
-base_model: []
 library_name: transformers
 tags:
 - mergekit
 - merge
-
+base_model:
+- bluuwhale/L3-SAO-MIX-8B-V1
+- Sao10K/L3-8B-Niitama-v1
+- Sao10K/L3-8B-Lunaris-v1
+- Sao10K/L3-8B-Tamamo-v1
+- Sao10K/L3-8B-Stheno-v3.2
 ---
-#
+# L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc
 
-This is a
+This is a remerge of [bluuwhale's merge](https://huggingface.co/bluuwhale/L3-SAO-MIX-8B-V1) using the exact same YAML config; the only difference is that the merge calculations are done in fp32 instead of bf16.
 
 ## Merge Details
 ### Merge Method
 
-This model was merged using the della merge method using /
+This model was merged using the della merge method using [Sao10K/L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1) as a base.
 
 ### Models Merged
 
 The following models were included in the merge:
-* /
-* /
-* /
+* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
+* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
+* [Sao10K/L3-8B-Tamamo-v1](https://huggingface.co/Sao10K/L3-8B-Tamamo-v1)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
-
 models:
-  - model: /
+  - model: Sao10K/L3-8B-Lunaris-v1
     parameters:
       weight: 1.0
-  - model: /
+  - model: Sao10K/L3-8B-Stheno-v3.2
     parameters:
       weight: 1.0
-  - model: /
+  - model: Sao10K/L3-8B-Niitama-v1
     parameters:
       weight: 1.0
-  - model: /
+  - model: Sao10K/L3-8B-Tamamo-v1
     parameters:
       weight: 1.0
-base_model: /
+base_model: Sao10K/L3-8B-Niitama-v1
 merge_method: della
 dtype: float32
 out_dtype: bfloat16
-
 ```
````
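The point of the change is numerical: bf16 keeps only 8 bits of mantissa, so the small per-weight contributions that della sums can round away entirely when the intermediate arithmetic runs in bf16, whereas fp32 preserves them until the single cast at the end. A minimal PyTorch sketch of the effect (illustrative only, not part of the card):

```python
import torch

# Near 1.0, bf16's spacing is 2**-7 ~ 0.0078, so a 1e-3 update is below
# half an ulp and rounds away on every single addition.
acc_bf16 = torch.tensor(1.0, dtype=torch.bfloat16)
acc_fp32 = torch.tensor(1.0, dtype=torch.float32)

for _ in range(1000):
    acc_bf16 = acc_bf16 + torch.tensor(1e-3, dtype=torch.bfloat16)
    acc_fp32 = acc_fp32 + 1e-3

print(acc_bf16.item())                     # 1.0: every update was lost
print(acc_fp32.to(torch.bfloat16).item())  # ~2.0: rounded once, at the end
```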
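To reproduce the merge, the YAML above can be fed to mergekit directly. Here is a sketch using mergekit's Python API, assuming a recent mergekit and that the config is saved locally as `config.yaml` (the output path is likewise an assumption); the `mergekit-yaml` CLI does the same thing:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yaml"  # assumed: the YAML from the card, saved locally
OUTPUT_PATH = "./L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc"  # assumed

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# dtype: float32 makes mergekit do the della arithmetic in fp32;
# out_dtype: bfloat16 casts the merged weights once, when saving.
run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```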
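Because of `out_dtype: bfloat16`, the published weights load like any other bf16 Llama-3 8B checkpoint. A usage sketch; the repo id is an assumption inferred from the committer's account and the card's title:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id (committer account + card title); adjust if it differs.
REPO = "Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc"

tokenizer = AutoTokenizer.from_pretrained(REPO)
model = AutoModelForCausalLM.from_pretrained(
    REPO,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)
```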