---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
language:
- ja
- en
inference: false
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
---

# `rinna/youri-7b-chat-gptq`

![rinna-icon](./rinna.png)

# Overview
`rinna/youri-7b-chat-gptq` is a quantized version of [`rinna/youri-7b-chat`](https://huggingface.co./rinna/youri-7b-chat) produced with AutoGPTQ. The quantized model is about 4x smaller than the original, so it requires less memory and provides faster inference.
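
If you want to verify the size reduction yourself, one informal check is to sum the bytes of all tensors after loading. The sketch below is not part of the official example; it assumes the AutoGPTQ wrapper keeps the underlying `transformers` module in `model.model`, and the exact total will vary with the AutoGPTQ version and quantization config.

~~~~python
import torch
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-chat-gptq", use_safetensors=True)

# Quantized layers store packed integer weights, zeros, and scales as buffers,
# while non-quantized tensors (e.g. embeddings) remain fp16 parameters.
module = model.model  # assumption: the wrapper stores the HF module here
n_bytes = sum(t.numel() * t.element_size() for t in module.parameters())
n_bytes += sum(t.numel() * t.element_size() for t in module.buffers())
print(f"approx. in-memory weight size: {n_bytes / 1024**3:.2f} GiB")
~~~~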

* **Model architecture**

    Refer to the [original model](https://huggingface.co./rinna/youri-7b-chat) for architecture details.

* **Fine-tuning**
    
    Refer to the [original model](https://huggingface.co./rinna/youri-7b-chat) for fine-tuning details.

* **Authors**
    
    - [Toshiaki Wakatsuki](https://huggingface.co./t-w)
    - [Tianyu Zhao](https://huggingface.co./tianyuz)
    - [Kei Sawada](https://huggingface.co./keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-chat-gptq", use_safetensors=True)

# Instruction and user input for the first turn.
instruction = "次の日本語を英語に翻訳してください。"  # "Translate the following Japanese into English."
input_text = "自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。"  # "Training a model to solve tasks from natural-language instructions is called instruction tuning."

# The model expects a multi-turn prompt with the speakers 設定 (setting),
# ユーザー (user), and システム (system), one "speaker: text" line per turn.
context = [
    {
        "speaker": "設定",
        "text": instruction
    },
    {
        "speaker": "ユーザー",
        "text": input_text
    }
]
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in context
]
prompt = "\n".join(prompt)
# End the prompt with the system cue so the model generates the assistant turn.
prompt = (
    prompt
    + "\n"
    + "システム: "
)
# Tokenize without special tokens; the chat format is plain text.
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

# Sample the system response (up to 200 new tokens, moderate temperature).
with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)

# Keep only the new system response: drop the echoed prompt prefix and the
# trailing </s> end-of-sequence token.
output = output[len(prompt):-len("</s>")].strip()

# Second user turn: another Japanese passage to translate (a definition of "large language model").
input_text = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

# Extend the conversation with the model's response and the new user input.
context.extend([
    {
        "speaker": "システム",
        "text": output
    },
    {
        "speaker": "ユーザー",
        "text": input_text
    }
])
# Rebuild the prompt from the extended context, again ending with the system cue.
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
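
For multi-turn use, the prompt-building and generation steps above can be wrapped in a small helper. The sketch below is not part of the official example; the `chat` function name is made up here, and it simply repackages the same prompt format and generation settings, reusing the `model` and `tokenizer` loaded above.

~~~~python
import torch

def chat(model, tokenizer, context, max_new_tokens=200, temperature=0.5):
    """Generate a system turn from `context`, a list of {"speaker", "text"}
    dicts using the speakers 設定, ユーザー, and システム, and append it."""
    prompt = "\n".join(f"{uttr['speaker']}: {uttr['text']}" for uttr in context)
    prompt += "\nシステム: "
    token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            input_ids=token_ids.to(model.device),
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=temperature,
            pad_token_id=tokenizer.pad_token_id,
            bos_token_id=tokenizer.bos_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    output = tokenizer.decode(output_ids.tolist()[0])
    # Strip the echoed prompt and the trailing </s> token, as in the example above.
    response = output[len(prompt):-len("</s>")].strip()
    context.append({"speaker": "システム", "text": response})
    return response
~~~~

With this helper, each call to `chat(model, tokenizer, context)` appends one system turn, so a conversation is just repeated calls interleaved with appending ユーザー turns to `context`.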

---

# Tokenization
The model uses the original Llama 2 tokenizer.
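
As a quick illustration, the tokenizer can be loaded on its own. A minimal sketch; note that the Llama 2 vocabulary splits many Japanese characters into multiple byte-level pieces, so Japanese text is often tokenized less compactly than English:

~~~~python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat-gptq")

text = "自然言語処理"  # "natural language processing", 6 characters
ids = tokenizer.encode(text, add_special_tokens=False)
print(len(ids))                               # token count for 6 characters
print(tokenizer.convert_ids_to_tokens(ids))   # the individual pieces
print(tokenizer.decode(ids))                  # round-trips to the original text
~~~~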

---

# How to cite
~~~
@misc{RinnaYouri7bChatGPTQ, 
    url={https://huggingface.co./rinna/youri-7b-chat-gptq}, 
    title={rinna/youri-7b-chat-gptq}, 
    author={Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei}
}
~~~

---

# License
[The llama2 license](https://ai.meta.com/llama/license/)