grimjim committed
Commit 12d216c
1 Parent(s): aa99ee7

Initial release

.gitattributes CHANGED
@@ -4,6 +4,7 @@
  *.bz2 filter=lfs diff=lfs merge=lfs -text
  *.ckpt filter=lfs diff=lfs merge=lfs -text
  *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
  *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,58 @@
- ---
- license: cc-by-nc-4.0
- ---
+ ---
+ base_model:
+ - grimjim/Mistral-Starling-merge-trial1-7B
+ - grimjim/kukulemon-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: cc-by-nc-4.0
+ pipeline_tag: text-generation
+ ---
+ # cuckoo-starling-32k-7B-GGUF
+
+ For this merged model, rope theta in config.json was manually adjusted down to 100K: less than the 1M that Mistral shipped with v0.2, but higher than the 10K that accompanied the practical 8K context of v0.1. We idly conjecture that 1M rope theta might improve performance on needle-in-a-haystack queries; however, during informal testing, narrative coherence seemed to occasionally suffer under 1M rope theta. Furthermore, the results reported in the arXiv paper [Scaling Laws of RoPE-based Extrapolation](https://arxiv.org/abs/2310.05209) suggest that 1M rope theta may be overkill for a 32K token context window.
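+
+ A minimal sketch of that adjustment (assuming a local checkout of the full-weights repo; `rope_theta` is the standard key in Mistral-family configs):
+
+ ```python
+ import json
+
+ # Lower rope theta in the model's config.json and write it back.
+ with open("cuckoo-starling-32k-7B/config.json") as f:
+     config = json.load(f)
+
+ config["rope_theta"] = 100000.0  # down from Mistral v0.2's 1M default
+
+ with open("cuckoo-starling-32k-7B/config.json", "w") as f:
+     json.dump(config, f, indent=2)
+ ```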
+
+ Lightly tested with temperature 0.9-1.0 and minP 0.02, using ChatML prompts. The model also natively supports Alpaca prompts.
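+
+ For example, with the Q4_K_M quant under llama-cpp-python (a sketch; `min_p` support requires a reasonably recent build of that library):
+
+ ```python
+ from llama_cpp import Llama
+
+ # Load the quant with the full 32K context window.
+ llm = Llama(model_path="cuckoo-starling-32k-7B.Q4_K_M.gguf", n_ctx=32768)
+
+ # ChatML-formatted prompt, sampled at the settings noted above.
+ prompt = (
+     "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
+     "<|im_start|>user\nSummarize spherical interpolation in one line.<|im_end|>\n"
+     "<|im_start|>assistant\n"
+ )
+ out = llm(prompt, temperature=0.95, min_p=0.02, max_tokens=128)
+ print(out["choices"][0]["text"])
+ ```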
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ Full weights: [grimjim/cuckoo-starling-32k-7B](https://huggingface.co/grimjim/cuckoo-starling-32k-7B/)
+ GGUFs: [grimjim/cuckoo-starling-32k-7B-GGUF](https://huggingface.co/grimjim/cuckoo-starling-32k-7B-GGUF/)
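+
+ A quant can be fetched directly with `huggingface_hub` (a sketch; any of the filenames listed in this repo works):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Download one GGUF quant; returns the local cache path.
+ path = hf_hub_download(
+     repo_id="grimjim/cuckoo-starling-32k-7B-GGUF",
+     filename="cuckoo-starling-32k-7B.Q4_K_M.gguf",
+ )
+ print(path)
+ ```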
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
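+ To illustrate the idea (a simplified sketch, not mergekit's exact implementation), SLERP interpolates along the arc between two weight tensors rather than along the straight line between them:
+
+ ```python
+ import torch
+
+ def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
+     """Spherical linear interpolation between two weight tensors."""
+     a_flat, b_flat = a.flatten().float(), b.flatten().float()
+     a_unit = a_flat / (a_flat.norm() + eps)
+     b_unit = b_flat / (b_flat.norm() + eps)
+     omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
+     if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
+         return (1 - t) * a + t * b
+     so = torch.sin(omega)
+     mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
+     return mixed.reshape(a.shape).to(a.dtype)
+ ```
+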
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [grimjim/Mistral-Starling-merge-trial1-7B](https://huggingface.co/grimjim/Mistral-Starling-merge-trial1-7B)
+ * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: grimjim/Mistral-Starling-merge-trial1-7B
+     layer_range: [0, 32]
+   - model: grimjim/kukulemon-7B
+     layer_range: [0, 32]
+ # or, equivalently, the top-level `models:` syntax:
+ # models:
+ merge_method: slerp
+ base_model: grimjim/Mistral-Starling-merge-trial1-7B
+ parameters:
+   t:
+   - filter: self_attn
+     value: [0, 0.5, 0.3, 0.7, 1]
+   - filter: mlp
+     value: [1, 0.5, 0.7, 0.3, 0]
+   - value: 0.5 # fallback for rest of tensors
+ dtype: bfloat16
+ ```
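+
+ To reproduce the merge, the config above can be passed to mergekit's CLI (a sketch; the config and output paths are illustrative):
+
+ ```python
+ import subprocess
+
+ # mergekit installs the `mergekit-yaml` entry point (pip install mergekit).
+ # "merge-config.yaml" holds the YAML above; "./merged" is the output directory.
+ subprocess.run(["mergekit-yaml", "merge-config.yaml", "./merged"], check=True)
+ ```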
cuckoo-starling-32k-7B.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c372db5e051634c337d2390cf678156d27a9d297d3fffc6405df4f876cf2844f
+ size 4368439584
cuckoo-starling-32k-7B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a01e4309d0802d01024bc16799fbec75610be92035e44d178b8022a4a181e8e2
+ size 5131409696
cuckoo-starling-32k-7B.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5a6a05e5235b2dc0eaa75ce13cbcc49f85148586942b13079839e516cc4672f
+ size 5942065440
cuckoo-starling-32k-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1ea4a838e6284b04fcd728c05b08579c850107ec7d59c78d30b02d513dc84d3
+ size 7695857952
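
The four entries above are Git LFS pointers; the actual GGUF blobs are fetched via LFS. A minimal sketch for verifying a downloaded quant against the recorded digest (expected hash taken from the Q4_K_M pointer):

```python
import hashlib

EXPECTED_SHA256 = "c372db5e051634c337d2390cf678156d27a9d297d3fffc6405df4f876cf2844f"
PATH = "cuckoo-starling-32k-7B.Q4_K_M.gguf"

# Stream the file in 1 MiB chunks to keep memory flat for multi-GB blobs.
h = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED_SHA256, "checksum mismatch: re-download the file"
print("OK:", PATH)
```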