
Model Card for llama2_telugu

Welcome to the eswardivi/llama2_telugu model page on Hugging Face. This model is the result of fine-tuning PosteriorAI/godavari-telugu-llama2-7B on the ravithejads/telugu_alpaca_ft dataset to better serve the Telugu-speaking community.

Overview

  • Base Model: PosteriorAI/godavari-telugu-llama2-7B, a state-of-the-art Telugu language model based on the LLaMA architecture, offering advanced natural language understanding and generation capabilities.
  • Fine-Tuning Dataset: ravithejads/telugu_alpaca_ft, a curated dataset designed for fine-tuning language models on Telugu instruction-following tasks.
  • Target Application: Enhanced communication, education, and technology access for the Telugu-speaking community, addressing the gap in AI support for Indic languages.

Usage

To use this model, you can leverage the Hugging Face API, SDK, or Transformers library. Below is a simple Python example using the Transformers library:

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="eswardivi/llama2_telugu",
    device_map="auto",
    model_kwargs={"load_in_8bit": True}  # 8-bit loading requires the bitsandbytes package
)

def create_prompt(instruction: str, input_text: str = "") -> str:
    # Keep the template flush-left so the ### markers match the Alpaca-style
    # format the model was fine-tuned on (an indented triple-quoted string
    # would embed the leading whitespace in the prompt).
    prompt = f"""You are a helpful assistant.
### Instruction:
{instruction}

### Input:
{input_text}

### Response:
"""
    return prompt

# Telugu (romanized): "According to the information below, when was the Google News app released?"
instruction = "Krindi samaacharam prakaram google news app eppudu release ayyindi?"
# Renamed from `input` to avoid shadowing the Python builtin.
input_text = "Google News is a news aggregator service developed by Google. It presents a continuous flow of links to articles organized from thousands of publishers and magazines. Google News is available as an app on Android, iOS, and the Web. Google released a beta version in September 2002 and the official app in January 2006."

prompt = create_prompt(instruction, input_text)
print(prompt)
out = pipe(
    prompt,
    num_return_sequences=1,
    max_new_tokens=1024,
    temperature=0.7,
    top_p=0.9,
    do_sample=True
)

print(out[0]['generated_text'])

Training Details

This model was fine-tuned using Axolotl.
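The exact training configuration is not published in this card. For illustration only, a minimal sketch of what an Axolotl config for such a fine-tune might look like; every value below is an assumption, not the actual recipe:

```yaml
# Hypothetical Axolotl config -- illustrative values only, not the published recipe
base_model: PosteriorAI/godavari-telugu-llama2-7B
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: true
adapter: lora            # assumed: LoRA fine-tuning; a full fine-tune is also possible
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

datasets:
  - path: ravithejads/telugu_alpaca_ft
    type: alpaca         # matches the ### Instruction / Input / Response template

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
output_dir: ./llama2_telugu
```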

Contributions

Contributions to this model are welcome. Feel free to submit issues, feature requests, or pull requests via Hugging Face.

Contact

For inquiries or collaborations, please contact the model maintainer at [email protected].


Model Details

  • Format: Safetensors
  • Model size: 6.88B params
  • Tensor type: FP16
