# Model Card for gemma-2-2b-punjabi-finetuned-4

This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co./google/gemma-2-2b), trained with TRL for Punjabi instruction following.
## Quick start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

gemma_tokenizer = AutoTokenizer.from_pretrained("amanpreetsingh459/gemma-2-2b-punjabi-finetuned-4")
EOS_TOKEN = gemma_tokenizer.eos_token  # end-of-sequence token, shown for reference

model = AutoModelForCausalLM.from_pretrained(
    "amanpreetsingh459/gemma-2-2b-punjabi-finetuned-4",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Alpaca-style prompt template used during fine-tuning
alpaca_prompt = """
### Instruction:
{}
### Input:
{}
### Response:
{}"""

inputs = gemma_tokenizer(
    [
        alpaca_prompt.format(
            "ਮੇਨੂ ਏਕ ਕਵਿਤਾ ਲਿੱਖ ਕੇ ਦੇਯੋ ਜੀ",  # instruction: "Please write me a poem"
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to(model.device)  # device_map="auto" places the model; send inputs to the same device

outputs = model.generate(**inputs, max_new_tokens=250)
decoded_outputs = gemma_tokenizer.batch_decode(outputs)
print(decoded_outputs[0])
```
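The decoded string contains the prompt as well as the generation. If you only want the model's answer, you can split on the `### Response:` marker; a minimal sketch (the helper name is ours, not part of the model card):

```python
def extract_response(decoded: str) -> str:
    """Return only the text generated after the '### Response:' marker."""
    response = decoded.split("### Response:")[-1]
    return response.replace(gemma_tokenizer.eos_token, "").strip()

print(extract_response(decoded_outputs[0]))
```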
## Pipeline method

```python
from transformers import pipeline

question = "ਮੈਨੂੰ ਇੱਕ ਕਵਿਤਾ ਲਿਖੋ"  # "Write me a poem"
generator = pipeline(
    "text-generation",
    model="amanpreetsingh459/gemma-2-2b-punjabi-finetuned-4",
    device="cuda",
)
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
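Passing chat-style messages relies on the tokenizer's chat template. Since the model was tuned on the Alpaca template rather than a chat format, the template may format the prompt differently from training; in that case you can pass the formatted prompt string directly. A sketch reusing `alpaca_prompt` from the quick-start snippet:

```python
prompt = alpaca_prompt.format("ਮੈਨੂੰ ਇੱਕ ਕਵਿਤਾ ਲਿਖੋ", "", "")
output = generator(prompt, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```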
## Training procedure

This model was fine-tuned with LoRA adapters using supervised fine-tuning (SFT) in TRL.
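The card does not include the full training configuration. As a minimal sketch of what such a run can look like with TRL and PEFT: the dataset id, column names, and LoRA hyperparameters below are illustrative assumptions, and `alpaca_prompt`/`EOS_TOKEN` come from the quick-start snippet.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset id; the citations below credit OdiaGenAI's
# Punjabi Alpaca instruction set (https://huggingface.co./OdiaGenAI).
dataset = load_dataset("OdiaGenAI/punjabi_alpaca_52K", split="train")

def formatting_func(examples):
    # Render each record with the Alpaca template from the quick-start snippet.
    texts = []
    for instruction, inp, out in zip(examples["instruction"], examples["input"], examples["output"]):
        texts.append(alpaca_prompt.format(instruction, inp, out) + EOS_TOKEN)
    return texts

# Illustrative LoRA settings, not the actual training values.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="google/gemma-2-2b",
    args=SFTConfig(output_dir="gemma-2-2b-punjabi-finetuned", max_seq_length=1024),
    train_dataset=dataset,
    formatting_func=formatting_func,
    peft_config=peft_config,
)
trainer.train()
```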
## Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.4.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
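To check that your environment matches these versions, a quick sketch:

```python
import datasets, tokenizers, torch, transformers, trl

for pkg in (trl, transformers, torch, datasets, tokenizers):
    print(pkg.__name__, pkg.__version__)
```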
## Citations
```bibtex
@article{gemma_2024,
    title     = {Gemma},
    url       = {https://www.kaggle.com/m/3301},
    doi       = {10.34740/KAGGLE/M/3301},
    publisher = {Kaggle},
    author    = {Gemma Team},
    year      = {2024}
}

@misc{gemma-language-tuning,
    author       = {Glenn Cameron and Lauren Usui and Paul Mooney and Addison Howard},
    title        = {Google - Unlock Global Communication with Gemma},
    year         = {2024},
    howpublished = {\url{https://kaggle.com/competitions/gemma-language-tuning}},
    note         = {Kaggle}
}

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = {2020},
    publisher    = {GitHub},
    journal      = {GitHub repository},
    howpublished = {\url{https://github.com/huggingface/trl}}
}

@misc{PunjabiAlpaca,
    author       = {Sambit Sekhar and Shantipriya Parida},
    title        = {Punjabi Instruction Set Based on Alpaca},
    year         = {2023},
    publisher    = {Hugging Face},
    journal      = {Hugging Face repository},
    howpublished = {\url{https://huggingface.co./OdiaGenAI}}
}
```