jspr committed on
Commit
4b27fc1
1 Parent(s): e50bcfb

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -9,21 +9,19 @@ tags:
 ---
 # miqurelian-120b
 
-This is v2.0 of a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16) using [mergekit](https://github.com/cg123/mergekit).
+This is a 120b merge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16), a creative writing model, using [mergekit](https://github.com/cg123/mergekit). It performs at approximately SOTA level on long-context creative writing tasks that require strong semantic coherence.
 
 ## Model Details
 
 - Max Context: 32768 tokens
 - Layers: 140
 
-### Prompt template: Mistral
+### Prompt template
 
 ```
 <s>[INST] {prompt} [/INST]
 ```
 
-## Merge Details
-
 ### Merge Method
 
 This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
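For context on the linear merge method cited in the diff: it produces each merged parameter as a weighted average of the corresponding parameters in the input models. A minimal sketch in plain Python (toy floats stand in for weight tensors; `linear_merge` and the example state dicts are illustrative, not mergekit's actual API):

```python
def linear_merge(state_dicts, weights):
    """Linear merge sketch: weighted average of matching parameters.

    state_dicts: list of {param_name: value} dicts (toy floats here,
    real tensors in practice). weights: one weight per model.
    """
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for sd, w in zip(state_dicts, weights)) / total
    return merged

# Hypothetical two-model example with equal weights.
model_a = {"layers.0.weight": 1.0, "layers.1.weight": 4.0}
model_b = {"layers.0.weight": 3.0, "layers.1.weight": 0.0}
merged = linear_merge([model_a, model_b], weights=[0.5, 0.5])
# merged["layers.0.weight"] == 2.0, merged["layers.1.weight"] == 2.0
```

Note this only illustrates the averaging step; the interleaving of layers that yields the 140-layer stack is configured separately in mergekit and is not shown in this commit.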