---
language:
- ja
license: other
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- gemma
datasets:
- kunishou/amenokaku-code-instruct
license_name: gemma
base_model: unsloth/gemma-2b-it-bnb-4bit
---

# Uploaded model

- **Developed by:** taoki
- **License:** gemma
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/gemma-2b-it-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/gemma-2b-it-qlora-amenokaku-code"
)

if torch.cuda.is_available():
    model = model.to("cuda")

# "Please output the writing styles of Murasaki Shikibu and Sei Shōnagon as JSON."
prompt = """<start_of_turn>user
紫式部と清少納言の作風をjsonで出力してください。<end_of_turn>
<start_of_turn>model
"""

input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **input_ids,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.95,
    temperature=0.1,
    repetition_penalty=1.0,
)
print(tokenizer.decode(outputs[0]))
```

# Output

````
<start_of_turn>user
紫式部と清少納言の作風をjsonで出力してください。<end_of_turn>
<start_of_turn>model
```json
{
  "紫式部": {
    "style": "紫式部",
    "name": "紫式部",
    "description": "紫式部の作風"
  },
  "清少納言": {
    "style": "清少納言",
    "name": "清少納言",
    "description": "清少納言の作風"
  }
}
```
````

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
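Gemma's instruction-tuned checkpoints expect each conversation turn to be wrapped in `<start_of_turn>`/`<end_of_turn>` markers, with an open model turn at the end to prime generation. As a minimal sketch, the prompt above could be built with a small helper like this (the `build_gemma_prompt` function is hypothetical, not part of this model card; `tokenizer.apply_chat_template` produces the same format from a message list):

```python
def build_gemma_prompt(user_message: str) -> str:
    # Wrap the user message in Gemma's turn markers and
    # leave an open model turn so generation continues from it.
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("紫式部と清少納言の作風をjsonで出力してください。")
print(prompt)
```

The resulting string can be passed to the tokenizer exactly as in the usage example above.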