---

base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
- elyza/Llama-3-ELYZA-JP-8B
- nvidia/Llama3-ChatQA-1.5-8B
library_name: transformers
tags:
- mergekit
- merge
language:
- ja
license: llama3

---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Llama3.1-ArrowSE-v0.4-GGUF
This is a quantized version of [DataPilot/Llama3.1-ArrowSE-v0.4](https://huggingface.co./DataPilot/Llama3.1-ArrowSE-v0.4), created using llama.cpp.
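
Since this repo provides GGUF files, the model can also be run directly with llama.cpp or its Python bindings. Below is a minimal sketch using `llama-cpp-python`; the GGUF filename and quantization level are assumptions, so substitute whichever file you actually downloaded.

```python
from llama_cpp import Llama

# Filename is an assumption; use whichever quantization you downloaded
llm = Llama(
    model_path="./Llama3.1-ArrowSE-v0.4.Q4_K_M.gguf",
    n_ctx=4096,             # context window
    chat_format="llama-3",  # apply the Llama 3 chat template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
        {"role": "user", "content": "自己紹介をしてください。"},
    ],
    max_tokens=512,
    temperature=0.6,
)
print(response["choices"][0]["message"]["content"])
```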

# Original Model Card


## Overview

This model was built on top of llama3.1-8B-instruct, using Mergekit and fine-tuning to improve its Japanese-language performance.

Our thanks go to everyone at Meta, ELYZA, and NVIDIA.

## How to use


```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# System prompt: "You are a sincere and excellent Japanese assistant.
# Unless instructed otherwise, always answer in Japanese."
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"
# User prompt: "Explain the five most important things for succeeding
# as a VTuber, simply enough for an elementary-school student."
text = "Vtuberとして成功するために大切な5つのことを小学生にでもわかるように教えてください。"

model_name = "DataPilot/Llama3.1-ArrowSE-v0.4"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
model.eval()

# Build the prompt with the model's chat template
messages = [
    {"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
    {"role": "user", "content": text},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# The chat template already inserts special tokens, so skip them here
token_ids = tokenizer.encode(
    prompt, add_special_tokens=False, return_tensors="pt"
)

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=1200,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )
# Decode only the newly generated tokens, dropping the prompt
output = tokenizer.decode(
    output_ids.tolist()[0][token_ids.size(1):], skip_special_tokens=True
)
print(output)
```


## Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using meta-llama/Meta-Llama-3.1-8B-Instruct as a base.
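
TIES keeps only the largest-magnitude parameter changes from each fine-tuned model, resolves sign conflicts by per-element majority, and combines the surviving deltas on top of the base model. The following is a minimal single-tensor sketch of that idea, not mergekit's actual implementation; `density` and the exact order of trimming and weighting are simplifications.

```python
import torch

def ties_merge(base: torch.Tensor, tuned: list[torch.Tensor],
               weights: list[float], density: float = 0.2) -> torch.Tensor:
    """Sketch of TIES merging for a single parameter tensor."""
    deltas = []
    for t, w in zip(tuned, weights):
        d = (t - base) * w                    # weighted task vector
        k = max(1, int(density * d.numel()))  # keep top-k by magnitude (trim)
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        deltas.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(deltas)             # (num_models, *param_shape)
    sign = stacked.sum(dim=0).sign()          # elect a sign per element
    agree = stacked.sign() == sign            # keep deltas that agree with it
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```

mergekit's real implementation differs in details, for example in how the per-model `weight` values and the `normalize` option interact.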

### Models Merged

The following models were included in the merge:
* [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3.1-8B-Instruct)
* [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co./elyza/Llama-3-ELYZA-JP-8B)
* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co./nvidia/Llama3-ChatQA-1.5-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: meta-llama/Meta-Llama-3.1-8B-Instruct
    parameters:
      weight: 1
  - model: elyza/Llama-3-ELYZA-JP-8B
    parameters:
      weight: 0.7
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      weight: 0.15
merge_method: ties
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: false
dtype: bfloat16
```
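
To reproduce a merge like this one, the YAML above can be saved to a file and passed to mergekit's CLI. A sketch assuming a standard `pip install mergekit`; the output directory name is arbitrary:

```sh
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```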