---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
language:
- ja
- en
tags:
- llama
- llama-3
- gptq
inference: false
base_model: rinna/llama-3-youko-70b-instruct
base_model_relation: quantized
---

# `Llama 3 Youko 70B Instruct GPTQ (rinna/llama-3-youko-70b-instruct-gptq)`

![rinna-icon](./rinna.png)

# Overview

rinna/llama-3-youko-70b-instruct-gptq is a quantized version of [rinna/llama-3-youko-70b-instruct](https://huggingface.co./rinna/llama-3-youko-70b-instruct), produced with [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ). The quantized model is roughly 4x smaller than the original, so it requires less memory and provides faster inference.

| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 8B | Llama 3 Youko 8B [[HF]](https://huggingface.co./rinna/llama-3-youko-8b) [[GPTQ]](https://huggingface.co./rinna/llama-3-youko-8b-gptq) | Llama 3 Youko 8B Instruct [[HF]](https://huggingface.co./rinna/llama-3-youko-8b-instruct) [[GPTQ]](https://huggingface.co./rinna/llama-3-youko-8b-instruct-gptq) |
| 70B | Llama 3 Youko 70B [[HF]](https://huggingface.co./rinna/llama-3-youko-70b) [[GPTQ]](https://huggingface.co./rinna/llama-3-youko-70b-gptq) | Llama 3 Youko 70B Instruct [[HF]](https://huggingface.co./rinna/llama-3-youko-70b-instruct) [[GPTQ]](https://huggingface.co./rinna/llama-3-youko-70b-instruct-gptq) |

* **Training: Built with Meta Llama 3**

  See [rinna/llama-3-youko-70b-instruct](https://huggingface.co./rinna/llama-3-youko-70b-instruct) for details about the model architecture and training data.

* **Contributors**

  - [Toshiaki Wakatsuki](https://huggingface.co./t-w)
  - [Koh Mitsuda](https://huggingface.co./mitsu-koh)
  - [Xinqi Chen](https://huggingface.co./Keely0419)
  - [Kei Sawada](https://huggingface.co./keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

We found that this instruction-tuned model tends to generate repeated text more often than its base counterpart, so we set repetition_penalty=1.1 for better generation performance. The same repetition penalty was applied to the instruction-tuned model in the benchmark experiments referenced above.

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rinna/llama-3-youko-70b-instruct-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

messages = [
    # System: "You are a sincere and capable assistant. Please answer concisely and honestly."
    {"role": "system", "content": "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。"},
    # User: "What kind of person was Kitaro Nishida?"
    {"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)

response = outputs[0][input_ids.shape[-1]:]
response = tokenizer.decode(response, skip_special_tokens=True)
print(response)
~~~~

---

# Tokenization

The model uses the original [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-70B-Instruct) tokenizer.
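
---

# Quantization (illustrative sketch)

This card states that the quantization was performed with AutoGPTQ but does not document the exact settings. The snippet below is a minimal sketch of a typical 4-bit AutoGPTQ workflow; the `bits`, `group_size`, `desc_act` values and the calibration texts are illustrative assumptions, not rinna's actual configuration. Note that quantizing a 70B model this way requires substantial GPU and CPU memory.

~~~~python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

base_model_id = "rinna/llama-3-youko-70b-instruct"
output_dir = "llama-3-youko-70b-instruct-gptq"

# Assumed settings: 4-bit weights give the ~4x size reduction mentioned above;
# group_size=128 is a common choice trading accuracy against model size.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoGPTQForCausalLM.from_pretrained(base_model_id, quantize_config)

# A couple of toy calibration samples; a real run would use a larger,
# representative corpus (here, presumably Japanese and English text).
calibration_texts = [
    "西田幾多郎は、京都学派の創始者として知られる日本の哲学者である。",
    "Large language models can be compressed with post-training quantization.",
]
examples = [tokenizer(text, return_tensors="pt") for text in calibration_texts]

model.quantize(examples)            # run GPTQ calibration over the examples
model.save_quantized(output_dir)    # write the quantized weights
tokenizer.save_pretrained(output_dir)
~~~~

For inference, the `transformers` snippet above is sufficient: with `optimum` and `auto-gptq` installed, `AutoModelForCausalLM.from_pretrained` detects the GPTQ configuration stored in the repository and dispatches to the quantized kernels automatically.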
---

# How to cite

```bibtex
@misc{rinna-llama-3-youko-70b-instruct-gptq,
    title = {rinna/llama-3-youko-70b-instruct-gptq},
    author = {Wakatsuki, Toshiaki and Mitsuda, Koh and Chen, Xinqi and Sawada, Kei},
    url = {https://huggingface.co./rinna/llama-3-youko-70b-instruct-gptq}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```

---

# References

```bibtex
@article{llama3modelcard,
    title = {Llama 3 Model Card},
    author = {AI@Meta},
    year = {2024},
    url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

@article{frantar2022gptq,
    title = {{GPTQ}: Accurate Post-training Compression for Generative Pretrained Transformers},
    author = {Frantar, Elias and Ashkboos, Saleh and Hoefler, Torsten and Alistarh, Dan},
    year = {2022},
    url = {https://arxiv.org/abs/2210.17323}
}
```

---

# License

[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)