---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
language:
- ja
- en
inference: false
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
tags:
- gptq
base_model: rinna/youri-7b-instruction
base_model_relation: quantized
---

# `rinna/youri-7b-instruction-gptq`

![rinna-icon](./rinna.png)

# Overview
`rinna/youri-7b-instruction-gptq` is a quantized version of [`rinna/youri-7b-instruction`](https://huggingface.co./rinna/youri-7b-instruction), produced with AutoGPTQ. The quantized model is roughly 4x smaller than the original, so it requires less memory and provides faster inference (a reproduction sketch is included at the end of this card).

* **Model architecture**

    Refer to the [original model](https://huggingface.co./rinna/youri-7b-instruction) for architecture details.

* **Fine-tuning**

    Refer to the [original model](https://huggingface.co./rinna/youri-7b-instruction) for fine-tuning details.

* **Contributors**

    - [Toshiaki Wakatsuki](https://huggingface.co./t-w)
    - [Tianyu Zhao](https://huggingface.co./tianyuz)
    - [Kei Sawada](https://huggingface.co./keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the tokenizer and the quantized GPTQ checkpoint
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-instruction-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-instruction-gptq", use_safetensors=True)

# Instruction: "Translate the following Japanese into English."
instruction = "次の日本語を英語に翻訳してください。"
# Input: a Japanese passage describing large language models
input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

# Build the prompt in the instruction/input/response format used for fine-tuning
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
{instruction}

### 入力:
{input}

### 応答:
"""

token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~

---

# Tokenization
The model uses the original llama-2 tokenizer.

---

# How to cite
```bibtex
@misc{rinna-youri-7b-instruction-gptq,
    title = {rinna/youri-7b-instruction-gptq},
    author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co./rinna/youri-7b-instruction-gptq}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```

---

# License
[The llama2 license](https://ai.meta.com/llama/license/)
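
---

# Quantization sketch

As noted in the Overview, this checkpoint was produced from `rinna/youri-7b-instruction` with AutoGPTQ. The exact quantization settings and calibration data are not documented in this card, so the snippet below is only a minimal sketch of how a comparable GPTQ checkpoint could be built; the bit width, group size, and calibration texts shown here are assumptions, not necessarily the settings used for this model.

~~~~python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base = "rinna/youri-7b-instruction"
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumed settings: 4-bit weights with a group size of 128.
# The settings actually used for rinna/youri-7b-instruction-gptq may differ.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,
)

# Load the full-precision model together with the quantization config
model = AutoGPTQForCausalLM.from_pretrained(base, quantize_config)

# Calibration examples: tokenized texts in the target domain.
# Placeholder texts only; the actual calibration corpus is not specified here.
calibration_texts = [
    "大規模言語モデルは、多数のパラメータを持つ人工ニューラルネットワークで構成される言語モデルである。",
    "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。",
]
examples = [tokenizer(text) for text in calibration_texts]

# Run GPTQ calibration and save the quantized weights as safetensors
model.quantize(examples)
model.save_quantized("youri-7b-instruction-gptq", use_safetensors=True)
~~~~

The resulting directory can then be loaded with `AutoGPTQForCausalLM.from_quantized`, as shown in the usage example above.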