---
base_model:
- grimjim/llama-3-Nephilim-v3-8B
library_name: transformers
quantized_by: grimjim
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# llama-3-Nephilim-v3-8B-GGUF

This repo contains select GGUF quants of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Full weights are [here](https://huggingface.co./grimjim/llama-3-Nephilim-v3-8B).

Although none of the components of this merge were trained or intended for roleplay, the model can be used effectively in that role.

Tested with temperature 1.0 and minP 0.01. The model leans toward creative output, so adjust temperature upward or downward as desired.
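
For reference, a minimal sketch of these sampler settings using llama-cpp-python; the quant filename pattern below is a hypothetical placeholder, so substitute an actual file from this repo:

```python
# Sketch only: loads a GGUF quant and applies the tested sampler settings.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="grimjim/llama-3-Nephilim-v3-8B-GGUF",
    filename="*Q8_0.gguf",  # hypothetical pattern; pick a quant actually present in this repo
    n_ctx=8192,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set in a lighthouse."}],
    temperature=1.0,  # raise for more creativity, lower to rein it in
    min_p=0.01,       # minP sampling as tested
)
print(response["choices"][0]["message"]["content"])
```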

The merged model exhibits some initial format consistency issues, but these can be mitigated with an Instruct prompt. Additionally, prompt steering was employed to vary the text generation output and avoid some of the common failings observed during text generation with Llama 3 8B models. The complete Instruct prompt used during testing is available below.

- [context template](https://huggingface.co./debased-ai/SillyTavern-settings/blob/main/advanced_formatting/context_template/Llama%203%20Instruct%20Immersed2.json)
- [instruct prompt](https://huggingface.co./debased-ai/SillyTavern-settings/blob/main/advanced_formatting/instruct_mode/Llama%203%20Instruct%20Immersed2.json)

Built with Meta Llama 3.

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge](https://huggingface.co./grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge) as a base.
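
As a rough illustration (not mergekit's actual implementation), task arithmetic adds a weighted task vector, the difference between a fine-tuned model's weights and the base weights, onto the base model:

```python
# Illustrative sketch of task arithmetic (arXiv:2212.04089); mergekit's real
# implementation differs in detail. Assumes state dicts with matching keys.
import torch

def task_arithmetic_merge(base: dict, finetuned: dict, weight: float = 0.1) -> dict:
    """Return base + weight * (finetuned - base) for each tensor, unnormalized."""
    merged = {}
    for name, base_param in base.items():
        task_vector = finetuned[name] - base_param  # task vector for this tensor
        merged[name] = base_param + weight * task_vector
    return merged
```

With `normalize: false` and `weight: 0.1` as in the configuration below, the merge amounts to nudging the SPPO/SimPO base a small fraction of the way toward Swallow.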

### Models Merged

The following models were included in the merge:
* [tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co./tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
  - layer_range: [0, 32]
    model: tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
    parameters:
      weight: 0.1
```