---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
language:
- ja
- en
inference: false
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
---

# `rinna/youri-7b-chat`

![rinna-icon](./rinna.png)

# Overview
The model is the instruction-tuned version of [`rinna/youri-7b`](https://huggingface.co./rinna/youri-7b). It adopts a chat-style input format.
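
The chat format is a plain-text transcript: each turn is rendered as `speaker: text` on its own line, and the prompt ends with `システム: ` so that the model generates the assistant's reply. The template below sketches the format used in the usage example further down (the placeholders are illustrative):

~~~
設定: {instruction}
ユーザー: {user input}
システム: 
~~~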

* **Model architecture**

    A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [llama2 paper](https://arxiv.org/abs/2307.09288) for architecture details.

* **Fine-tuning**
    
    The fine-tuning data is a subset of the following datasets.
    * [Databricks Dolly data](https://huggingface.co./datasets/databricks/databricks-dolly-15k)
    * [Japanese Databricks Dolly data](https://huggingface.co./datasets/kunishou/databricks-dolly-15k-ja)
    * [Anthropic HH RLHF data](https://huggingface.co./datasets/Anthropic/hh-rlhf) and its Japanese translation
    * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation
    * [Izumi lab LLM Japanese dataset](https://github.com/masanorihirano/llm-japanese-dataset/tree/main)
      * Only the following sections are used:
        * alt
        * aozora-txt
        * CourseraParallel
        * ParaNatCom
        * Tab-delimited_Bilingual_Sentence_Pairs
        * tanaka-corpus
        * wikinews
        * wordnet
        * yasashi-japanese
      * The [remaining sections](https://github.com/masanorihirano/llm-japanese-dataset/tree/main/datasets-cc-by-sa) contain commonly used evaluation corpora, so they are excluded to prevent data leakage.

* **Authors**
    
    - [Tianyu Zhao](https://huggingface.co./tianyuz)
    - [Kei Sawada](https://huggingface.co./keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat")
model = AutoModelForCausalLM.from_pretrained("rinna/youri-7b-chat")

if torch.cuda.is_available():
    model = model.to("cuda")

instruction = "次の日本語を英語に翻訳してください。"
input = "自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。"

context = [
    {
        "speaker": "設定",
        "text": instruction
    },
    {
        "speaker": "ユーザー",
        "text": input
    }
]
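# Render each turn as "{speaker}: {text}" on its own line, and end the prompt
# with "システム: " so the model replies as the assistant.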
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
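# Encode the prompt without adding special tokens, then sample a reply
# (up to 200 new tokens, temperature 0.5).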
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
設定: 次の日本語を英語に翻訳してください。
ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。
システム:  Learning to solve tasks based on natural language instructions is called instruction tuning.</s>
"""

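# Keep only the newly generated reply (drop the prompt prefix and the trailing </s>),
# then continue the conversation by adding it and a new user turn to the context.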
output = output[len(prompt):-len("</s>")].strip()
input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

context.extend([
    {
        "speaker": "システム",
        "text": output
    },
    {
        "speaker": "ユーザー",
        "text": input
    }
])
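# Rebuild the prompt from the extended context and generate the next reply.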
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
設定: 次の日本語を英語に翻訳してください。
ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。
システム: Learning to solve tasks based on natural language instructions is called instruction tuning.
ユーザー: 大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。
システム:  Large language models (LLMs) are computer language models consisting of a deep artificial neural network with millions to billions of parameters that are trained by self-supervised learning or semi-supervised learning using vast unlabeled text corpora.</s>
"""
~~~~
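
Since both turns above repeat the same prompt-building and generation steps, it can be convenient to wrap them in a small helper. The sketch below only refactors the example above; the function name `chat` and its defaults are illustrative and assume that `model` and `tokenizer` have already been loaded as shown.

~~~~python
import torch

def chat(model, tokenizer, context, max_new_tokens=200, temperature=0.5):
    """Generate the next システム (assistant) reply for a chat-style context.

    `context` is a list of {"speaker": ..., "text": ...} dicts, as in the
    example above. Returns the reply and the context extended with it.
    """
    prompt = "\n".join(f"{uttr['speaker']}: {uttr['text']}" for uttr in context)
    prompt = prompt + "\n" + "システム: "
    token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            token_ids.to(model.device),
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=temperature,
            pad_token_id=tokenizer.pad_token_id,
            bos_token_id=tokenizer.bos_token_id,
            eos_token_id=tokenizer.eos_token_id
        )
    output = tokenizer.decode(output_ids.tolist()[0])
    # Strip the prompt prefix and the end-of-sequence marker, as in the example above.
    reply = output[len(prompt):].replace("</s>", "").strip()
    return reply, context + [{"speaker": "システム", "text": reply}]

# Example: one translation request followed by the model's reply.
context = [
    {"speaker": "設定", "text": "次の日本語を英語に翻訳してください。"},
    {"speaker": "ユーザー", "text": "自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。"}
]
reply, context = chat(model, tokenizer, context)
print(reply)
~~~~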

---

# Tokenization
The model uses the original llama-2 tokenizer.
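
To check how the tokenizer segments Japanese text, you can inspect the subword pieces and ids directly. This is a minimal sketch; the sample sentence is arbitrary.

~~~~python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat")

text = "自然言語処理はおもしろい。"
# Print the subword pieces and the corresponding token ids.
print(tokenizer.tokenize(text))
print(tokenizer.encode(text, add_special_tokens=False))
~~~~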

---

# How to cite
~~~
@misc{RinnaYouri7bChat, 
    url={https://huggingface.co./rinna/youri-7b-chat}, 
    title={rinna/youri-7b-chat}, 
    author={Zhao, Tianyu and Sawada, Kei}
}
~~~

---

# License
[The llama2 license](https://ai.meta.com/llama/license/)