nbeerbower committed
Commit: 9ead3bf
Parent: 6f167bb

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+mahou-1.5-mistral-nemo-12b-lorablated.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+mahou-1.5-mistral-nemo-12b-lorablated.bf16.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,44 @@
+---
+base_model:
+- flammenai/Mahou-1.5-mistral-nemo-12B
+- nbeerbower/Mistral-Nemo-12B-abliterated-LORA
+library_name: transformers
+license: apache-2.0
+tags:
+- mergekit
+- merge
+- autoquant
+- gguf
+---
+# Mahou-1.5-mistral-nemo-12B-lorablated
+
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+## Merge Details
+### Merge Method
+
+This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [flammenai/Mahou-1.5-mistral-nemo-12B](https://huggingface.co/flammenai/Mahou-1.5-mistral-nemo-12B) + [nbeerbower/Mistral-Nemo-12B-abliterated-LORA](https://huggingface.co/nbeerbower/Mistral-Nemo-12B-abliterated-LORA) as the base.
+
+### Models Merged
+
+No models beyond the base model and LoRA listed above were included in the merge.
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
+
+```yaml
+base_model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
+dtype: bfloat16
+merge_method: task_arithmetic
+parameters:
+  normalize: false
+slices:
+- sources:
+  - layer_range: [0, 40]
+    model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
+    parameters:
+      weight: 1.0
+```
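For reference, task arithmetic (arXiv:2212.04089) merges models by adding weighted task vectors (fine-tuned weights minus base weights) to the base. A minimal sketch over plain per-tensor dicts — the function name and dict-based "state dicts" here are illustrative; mergekit's real implementation additionally handles sharded checkpoints, dtypes, normalization, and the `+LoRA` application in the config above:

```python
# Sketch of the task-arithmetic merge method (illustrative, not mergekit's API).
def task_arithmetic_merge(base, tuned_models, weights):
    """merged = base + sum_i weight_i * (tuned_i - base), computed per tensor."""
    merged = {}
    for key, base_val in base.items():
        delta = sum(w * (m[key] - base_val) for m, w in zip(tuned_models, weights))
        merged[key] = base_val + delta
    return merged

# With a single source model and weight 1.0 (as in the YAML above),
# the merge reduces to the tuned weights themselves.
print(task_arithmetic_merge({"w": 1.0}, [{"w": 3.0}], [1.0]))  # {'w': 3.0}
```

A config like the one above would typically be executed with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-dir`.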
mahou-1.5-mistral-nemo-12b-lorablated.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6bc67129d8a70c086b606d43c481136099fe675ffd31488a42502c6ac0d775cc
+size 4791047648
mahou-1.5-mistral-nemo-12b-lorablated.bf16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7d7e12741a2728cccaa758585f139e7d1bd553ed82d1b106dc6e1a969af73cb
+size 24504276448
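The GGUF entries above are Git LFS pointer files: the `oid sha256:` line is the SHA-256 digest of the actual multi-gigabyte blob. A small sketch (the filename argument is whatever local path you downloaded the file to) for verifying a download against that digest:

```python
import hashlib

def lfs_sha256(path, chunk_size=1 << 20):
    # Stream in 1 MiB chunks so multi-gigabyte GGUF files never sit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest copied from the Q2_K pointer file above; uncomment after
# downloading the file locally:
EXPECTED = "6bc67129d8a70c086b606d43c481136099fe675ffd31488a42502c6ac0d775cc"
# assert lfs_sha256("mahou-1.5-mistral-nemo-12b-lorablated.Q2_K.gguf") == EXPECTED
```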