grimjim committed
Commit
94d08d1
1 Parent(s): 9d21e09

Update README.md


Added link to full weights

Files changed (1)
  1. README.md +58 -56
README.md CHANGED
@@ -1,56 +1,58 @@
 ---
 base_model:
 - grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
 - tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
 library_name: transformers
 tags:
 - mergekit
 - merge
 license: cc-by-nc-4.0
 pipeline_tag: text-generation
 ---
 # llama-3-Nephilim-v3-8B-GGUF
 
 This repo contains select GGUF quants of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
+Full weights are [here](https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B).
+
 Although none of the components of this merge were trained for or intended for roleplay, the model can be used effectively in that role.
 
 Tested with temperature 1 and minP 0.01. This model leans toward being creative, so adjust temperature upward or downward as desired.
 
 There are initial format-consistency issues with the merged model, but these can be mitigated with an Instruct prompt. Additionally, prompt steering was employed to vary the generated text and avoid some of the common failure modes observed with Llama 3 8B models. The complete Instruct prompt used during testing is available below.
 
 - [context template](https://huggingface.co/debased-ai/SillyTavern-settings/blob/main/advanced_formatting/context_template/Llama%203%20Instruct%20Immersed2.json)
 - [instruct prompt](https://huggingface.co/debased-ai/SillyTavern-settings/blob/main/advanced_formatting/instruct_mode/Llama%203%20Instruct%20Immersed2.json)
 
 Built with Meta Llama 3.
 
 ## Merge Details
 ### Merge Method
 
 This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) method, using [grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge](https://huggingface.co/grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge) as the base.
 
 ### Models Merged
 
 The following models were included in the merge:
 * [tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 base_model: grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
 dtype: bfloat16
 merge_method: task_arithmetic
 parameters:
   normalize: false
 slices:
 - sources:
   - layer_range: [0, 32]
     model: grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
   - layer_range: [0, 32]
     model: tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
     parameters:
       weight: 0.1
 
 ```
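A configuration like this is consumed by mergekit's command-line entry point. A sketch, assuming mergekit is installed and the YAML above is saved as `config.yaml` (the output directory name is arbitrary):

```shell
pip install mergekit

# Run the task-arithmetic merge described by config.yaml;
# the merged model is written to the given output directory.
mergekit-yaml config.yaml ./llama-3-Nephilim-v3-8B
```

The full-precision output of such a merge is what would then be quantized into the GGUF files in this repo.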