---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
language:
- ja
- en
inference: false
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
tags:
- gptq
base_model: rinna/youri-7b-instruction
base_model_relation: quantized
---
# `rinna/youri-7b-instruction-gptq`
![rinna-icon](./rinna.png)
# Overview
`rinna/youri-7b-instruction-gptq` is a quantized version of [`rinna/youri-7b-instruction`](https://huggingface.co./rinna/youri-7b-instruction), produced with AutoGPTQ. The quantized model is roughly 4x smaller than the original, so it requires less memory and provides faster inference.
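As a rough intuition for the 4x figure: an fp16 weight takes 16 bits, a 4-bit GPTQ weight takes 4. The toy sketch below shows plain round-to-nearest 4-bit quantization of a weight group; it is illustrative only, since the real AutoGPTQ procedure additionally uses calibration data and layer-wise error compensation.

```python
def quantize_4bit(weights):
    """Round-to-nearest symmetric quantization to the signed 4-bit range [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate fp weights from 4-bit integers and a per-group scale."""
    return [v * scale for v in q]

w = [0.12, -0.53, 0.31, 0.07]
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
```

Each quantized value fits in 4 bits instead of 16, which is where the ~4x storage reduction comes from (minus a small overhead for the per-group scales).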
* **Model architecture**
Refer to the [original model](https://huggingface.co./rinna/youri-7b-instruction) for architecture details.
* **Fine-tuning**
Refer to the [original model](https://huggingface.co./rinna/youri-7b-instruction) for fine-tuning details.
* **Contributors**
- [Toshiaki Wakatsuki](https://huggingface.co./t-w)
- [Tianyu Zhao](https://huggingface.co./tianyuz)
- [Kei Sawada](https://huggingface.co./keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-instruction-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-instruction-gptq", use_safetensors=True)

# Instruction: "Translate the following Japanese into English."
instruction = "次の日本語を英語に翻訳してください。"
input_text = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

# Alpaca-style prompt template used for fine-tuning.
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
{instruction}

### 入力:
{input_text}

### 応答:
"""

token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
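Note that `tokenizer.decode` returns the full sequence, prompt included. A minimal sketch for pulling out only the generated answer (plain string handling; it assumes the `### 応答:` marker from the prompt template above and `</s>` as the llama-2 EOS string):

```python
def extract_response(decoded: str, marker: str = "### 応答:") -> str:
    """Return only the text after the response marker, with the EOS token stripped."""
    response = decoded.split(marker, 1)[-1]
    return response.replace("</s>", "").strip()

# Example with a mock decoded string:
decoded = "### 指示:\n翻訳してください。\n### 応答:\nA large language model is ...</s>"
print(extract_response(decoded))  # → A large language model is ...
```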
---
# Tokenization
The model uses the original llama-2 tokenizer.
---
# How to cite
```bibtex
@misc{rinna-youri-7b-instruction-gptq,
title = {rinna/youri-7b-instruction-gptq},
author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
url = {https://huggingface.co./rinna/youri-7b-instruction-gptq}
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
pages = {13898--13905},
url = {https://aclanthology.org/2024.lrec-main.1213},
note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---
# License
[The llama2 license](https://ai.meta.com/llama/license/)