---
language: en
tags:
- fine-tuned
- causal-lm
- instruction-following
model_type: causal-lm
license: mit
datasets:
- mlabonne/guanaco-llama2-1k
metrics:
- accuracy
- loss
---
# Fine-tuned Llama-2 Model on Guanaco Instruction Dataset
## Model Description
This model is a fine-tuned version of Llama-2-7b-chat tailored to instruction-following tasks. It was trained on the [Guanaco Llama-2 1k dataset](https://huggingface.co./datasets/mlabonne/guanaco-llama2-1k), which helps it generate coherent, contextually appropriate responses to user prompts and follow instructions and queries more closely.
## Intended Use
This model is suitable for various applications, including:
- Instruction-following tasks
- Chatbot interactions
- Text completion based on user prompts
- Educational tools for generating explanations or summaries
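The guanaco-llama2-1k training data stores its examples in the Llama-2 chat format, so wrapping an instruction in the same `[INST] ... [/INST]` template is likely to give the best results for the instruction-following and chatbot uses above. The helper below is a minimal sketch under that assumption (the template is inferred from the dataset, not documented for this checkpoint):
```python
def build_prompt(instruction: str) -> str:
    # Llama-2 chat template, as used in mlabonne/guanaco-llama2-1k
    # (assumed here; adjust if your prompts were formatted differently).
    # The tokenizer adds the <s> (BOS) token automatically.
    return f"[INST] {instruction} [/INST]"

prompt = build_prompt("Summarize the main idea of supervised learning.")
```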
### How to Use
You can easily load this model using the Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gautamraj8044/Llama-2-7b-chat-finetune"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example usage
input_text = "Please explain the concept of machine learning."
inputs = tokenizer.encode(input_text, return_tensors="pt")

# Cap the number of newly generated tokens; generate() otherwise stops after a short default length
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
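For quick experimentation, the same checkpoint can also be run through the `text-generation` pipeline. The half-precision and device-placement settings below are illustrative assumptions for fitting a 7B model on a single GPU (they require the `accelerate` package), not requirements stated for this model:
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="gautamraj8044/Llama-2-7b-chat-finetune",
    torch_dtype=torch.float16,  # assumption: half precision to reduce GPU memory
    device_map="auto",          # assumption: requires the accelerate package
)

result = pipe(
    "[INST] Please explain the concept of machine learning. [/INST]",
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```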