---
language: en
tags:
- fine-tuned
- causal-lm
- instruction-following
model_type: causal-lm
license: mit
datasets:
- mlabonne/guanaco-llama2-1k
metrics:
- accuracy
- loss
---

# Fine-tuned Llama-2 Model on Guanaco Instruction Dataset

## Model Description

This model is a fine-tuned version of Llama-2 for instruction-following tasks. It was trained on the [Guanaco Llama-2 1k dataset](https://huggingface.co./datasets/mlabonne/guanaco-llama2-1k), so it can generate coherent, contextually appropriate responses to a given prompt. The fine-tuning aims to improve how reliably the model understands and follows instructions and queries.

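For reference, the training data can be inspected directly from the Hugging Face Hub. The snippet below is a minimal sketch using the `datasets` library; the `train` split and the single `text` column are assumptions based on the dataset card:

```python
from datasets import load_dataset

# Load the 1k-sample Guanaco subset used for fine-tuning
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

# Each row is assumed to hold one "text" field containing a full
# [INST] ... [/INST] prompt/response pair in Llama-2 chat format
print(dataset[0]["text"])
```
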
## Intended Use

This model is suitable for various applications, including:

- Instruction-following tasks
- Chatbot interactions (a prompt-format sketch follows this list)
- Text completion based on user prompts
- Educational tools for generating explanations or summaries

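Because the fine-tuning data follows the Llama-2 chat convention, chatbot-style prompts are expected to work best when wrapped in `[INST]` instruction tags. The following is a minimal sketch of the assumed prompt format; the exact template may differ from what was used in training:

```python
# Build a Llama-2 style instruction prompt (format assumed from the
# Guanaco Llama-2 dataset; adjust if your prompt template differs)
def build_prompt(instruction: str) -> str:
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_prompt("Summarize the key ideas of supervised learning in two sentences.")
print(prompt)
```
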
### How to Use

You can load this model with the Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gautamraj8044/Llama-2-7b-chat-finetune"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example usage
input_text = "Please explain the concept of machine learning."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
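
As an alternative to calling `generate` directly, the same checkpoint can be served through the Transformers `pipeline` API. This is a minimal sketch; the generation parameters shown (`max_new_tokens`, `temperature`) are illustrative defaults rather than tuned values:

```python
from transformers import pipeline

# Text-generation pipeline around the fine-tuned checkpoint
generator = pipeline(
    "text-generation",
    model="gautamraj8044/Llama-2-7b-chat-finetune",
)

result = generator(
    "<s>[INST] Please explain the concept of machine learning. [/INST]",
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```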