---
language:
- en
- it
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Meta Llama 3.1 8B Text-to-SQL 4-bit Fine-tuned Model
This model is a fine-tuned version of `Meta-Llama-3.1-8B`, developed by **ruslanmv** for text-to-SQL generation. It uses 4-bit quantization, making inference more memory-efficient while maintaining strong performance at translating natural-language requests into SQL queries.
---
## Model Details
- **Base Model**: `unsloth/meta-llama-3.1-8b-bnb-4bit`
- **Finetuned by**: ruslanmv
- **Languages**: English, Italian
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Tags**:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## Model Usage
### Installation
To use this model, you will need to install the necessary libraries:
```bash
pip install transformers accelerate bitsandbytes
```
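4-bit loading with `bitsandbytes` generally requires a CUDA-capable GPU. A minimal sanity check before loading the model (assuming `torch` was pulled in with the libraries above):
```python
import torch

# bitsandbytes 4-bit quantization runs on CUDA GPUs;
# confirm one is visible before loading the model.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```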
### Loading the Model in Python
Here’s an example of how to load this fine-tuned model using Hugging Face's `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Define the 4-bit quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the model and tokenizer from the Hugging Face Hub
model_name = "ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL-4bit"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Alpaca-style prompt template used during fine-tuning
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
"""

# Format the prompt without the response part.
# (Italian input: "Select all columns from table1 where the anni column equals 2020")
prompt = alpaca_prompt.format(
    "Provide the SQL query",
    "Seleziona tutte le colonne della tabella table1 dove la colonna anni è uguale a 2020",
)

# Tokenize the prompt and generate text
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)

# Decode the generated text and keep only the part after the response marker
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
response = generated_text.split("### Response:")[-1].strip()
print(response)
```
The generated response is:
```sql
SELECT * FROM table1 WHERE anni = 2020
```
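For repeated queries, the formatting, generation, and extraction steps above can be wrapped in a small helper. This is a minimal sketch reusing the `model`, `tokenizer`, and `alpaca_prompt` objects defined earlier; the name `generate_sql` is illustrative and not part of this repository:
```python
def generate_sql(instruction: str, context: str, max_new_tokens: int = 64) -> str:
    """Format an Alpaca-style prompt, generate, and return only the response."""
    prompt = alpaca_prompt.format(instruction, context)
    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, use_cache=True)
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    # Keep only what follows the response marker.
    return text.split("### Response:")[-1].strip()

print(generate_sql(
    "Provide the SQL query",
    "Select all columns from table1 where the anni column equals 2020",
))
```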
### Model Features
- **Text-to-SQL Generation**: The model is fine-tuned to turn natural-language requests (in English or Italian) into SQL queries, as in the example above.
- **Efficiency**: 4-bit NF4 quantization via the `bitsandbytes` library reduces the memory needed for inference (see the snippet below).
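To see what the 4-bit weights cost in memory, `transformers` models expose `get_memory_footprint()`; on the quantized model loaded above it should report roughly a quarter of the full-precision size (exact numbers vary by setup):
```python
# Approximate memory used by the quantized model weights, in GiB.
footprint_gib = model.get_memory_footprint() / 1024**3
print(f"Model memory footprint: {footprint_gib:.2f} GiB")
```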
### License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). You are free to use, modify, and distribute this model, provided that you comply with the license terms.
### Acknowledgments
This model was fine-tuned by **ruslanmv** from the `unsloth/meta-llama-3.1-8b-bnb-4bit` base model, building on the work of the Unsloth team.