---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
datasets:
- mc4
- cc100
- oscar
- wikipedia
- EleutherAI/pile
language:
- ja
- en
tags:
- gptq
inference: false
base_model: rinna/youri-7b
base_model_relation: quantized
---

# `rinna/youri-7b-gptq`

![rinna-icon](./rinna.png)

# Overview
`rinna/youri-7b-gptq` is a quantized version of [`rinna/youri-7b`](https://huggingface.co./rinna/youri-7b) created with AutoGPTQ. The quantized model is 4x smaller than the original, so it requires less memory and provides faster inference.

* **Library**

    Refer to the [original model](https://huggingface.co./rinna/youri-7b) for library details.

* **Model architecture**

    Refer to the [original model](https://huggingface.co./rinna/youri-7b) for architecture details.

* **Continual pre-training**

    Refer to the [original model](https://huggingface.co./rinna/youri-7b) for pre-training details.

* **Contributors**

    - [Toshiaki Wakatsuki](https://huggingface.co./t-w)
    - [Tianyu Zhao](https://huggingface.co./tianyuz)
    - [Kei Sawada](https://huggingface.co./keisawada)

---

# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the tokenizer and the GPTQ-quantized model
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-gptq", use_safetensors=True)

# Encode a Japanese prompt without adding special tokens
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

# Sample exactly 200 new tokens as a continuation of the prompt
with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        min_new_tokens=200,
        do_sample=True,
        temperature=1.0,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~

---

# Tokenization
The model uses the original llama-2 tokenizer.

---

# How to cite
```bibtex
@misc{rinna-youri-7b-gptq,
    title = {rinna/youri-7b-gptq},
    author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co./rinna/youri-7b-gptq}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```

---

# License
[The llama2 license](https://ai.meta.com/llama/license/)