---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
pipeline_tag: text-generation
---

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in resource-limited environments such as a laptop, a desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

This repository contains indo-gemma-2b-alpaca, a variant fine-tuned from unsloth/gemma-2b-bnb-4bit that uses an Indonesian Alpaca-style prompt template (shown below).

### Context Length

Models are trained on a context length of 8192 tokens.

### How to use

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Alpaca-style prompt template (Indonesian).
# Translation: "Below is an instruction that describes a task, paired with an
# input that provides further context. Write a response that appropriately
# completes the request." Headers: Instruction / Input / Response.
alpaca_prompt = """Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang melengkapi instruksi dengan tepat.

### Instruksi:
{}

### Masukan:
{}

### Tanggapan:
{}"""

max_seq_length = 4096  # Choose any! We auto support RoPE Scaling internally!
dtype = None           # None for auto detection. float16 for Tesla T4/V100, bfloat16 for Ampere+.
load_in_4bit = True    # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "indo-gemma-2b-alpaca",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Sebutkan langkah-langkah membuat nasi goreng!",  # "List the steps to make fried rice!"
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 256)
```

### Uploaded model

- **Developed by:** firqaaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit
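
### Loading with plain transformers (alternative)

If you prefer not to use unsloth, the snippet below is a minimal sketch that loads the model with plain transformers and bitsandbytes 4-bit quantization. It assumes the checkpoint is published as a standard Hugging Face checkpoint under the "indo-gemma-2b-alpaca" id; adjust `model_id` to the full Hub repo path if needed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Same Indonesian Alpaca-style template as in the snippet above.
alpaca_prompt = """Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang melengkapi instruksi dengan tepat.

### Instruksi:
{}

### Masukan:
{}

### Tanggapan:
{}"""

# 4-bit quantization config, mirroring load_in_4bit=True above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "indo-gemma-2b-alpaca"  # assumption: replace with the full Hub repo id if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = alpaca_prompt.format(
    "Sebutkan langkah-langkah membuat nasi goreng!",  # "List the steps to make fried rice!"
    "",  # input
    "",  # output - left blank for generation
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```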