
Model Card for Llama-2-7b-hf Fine-Tuned on OpenAssistant-Guanaco

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the timdettmers/openassistant-guanaco dataset.

Model Details

Model Description

This is a fine-tuned version of the meta-llama/Llama-2-7b-hf model, trained with Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on the Intel Gaudi 2 AI accelerator. It can be used for various text generation tasks, including chatbots, content creation, and other NLP applications.

  • Developed by: Keerthi Nalabotu
  • Model type: LLM
  • Language(s) (NLP): English
  • Finetuned from model: meta-llama/Llama-2-7b-hf

Uses

This model can be used for text generation tasks such as:

  • Chatbots
  • Automated content creation
  • Text completion and augmentation

Out-of-Scope Use

  • Use in real-time applications where latency is critical
  • Use in highly sensitive domains without thorough evaluation and testing

How to Get Started with the Model

Use the code below to get started with the model.
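
The following is a minimal inference sketch using the transformers and peft libraries. The adapter repository id is a placeholder (replace it with this model's actual repo id), and the generation settings and the "### Human: ... ### Assistant:" prompt layout (mirroring the OpenAssistant-Guanaco data) are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "<this-repo-id>"  # placeholder: the LoRA adapter repo/path for this model

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# The OpenAssistant-Guanaco data uses a "### Human: ... ### Assistant:" layout
prompt = "### Human: Explain what LoRA fine-tuning is in two sentences.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```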

Training Details

  • Training regime: Mixed-precision training using bf16
  • Number of epochs: 3
  • Learning rate: 1e-4
  • Batch size: 16
  • Sequence length: 512
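
Below is a minimal sketch of how a comparable LoRA fine-tune could be set up with the transformers, peft, and datasets libraries, using the hyperparameters listed above. The LoRA rank, alpha, dropout, and target modules are assumed values not documented in this card, and the original run used the Intel Gaudi 2 accelerator (e.g. via optimum-habana) rather than the stock Trainer shown here.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)

# LoRA adapter configuration; rank, alpha, dropout, and target modules are assumed values
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tokenize the instruction data to the 512-token sequence length used for training
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

# Hyperparameters from this card: 3 epochs, lr 1e-4, batch size 16, bf16 mixed precision
training_args = TrainingArguments(
    output_dir="llama2-7b-guanaco-lora",
    num_train_epochs=3,
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```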

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: Intel Gaudi 2 AI accelerator
  • Hours used: < 1 hour
