|
--- |
|
library_name: transformers |
|
tags: [] |
|
--- |
|
|
|
|
|
|
|
|
|
|
# Fine-Tuning LLaMA-2-7b with QLoRA on a Custom Dataset
|
|
|
This repository provides a setup and script for fine-tuning the LLaMA-2-7b model using QLoRA (Quantized Low-Rank Adaptation) with custom datasets. The script is designed for efficiency and flexibility in training large language models (LLMs) by leveraging advanced techniques such as 4-bit quantization and LoRA. |
|
|
|
## Overview |
|
|
|
The script fine-tunes a pre-trained LLaMA-2-7b model using a custom dataset, applying QLoRA techniques to optimize performance. It utilizes the `transformers`, `datasets`, `peft`, and `trl` libraries for model management, data processing, and training. The setup includes support for mixed precision training, gradient checkpointing, and advanced quantization techniques to enhance the efficiency of the fine-tuning process. |
|
|
|
## Components |
|
|
|
### 1. Dependencies |
|
|
|
Ensure the following libraries are installed:

- `torch`

- `datasets`

- `transformers`

- `peft`

- `trl`

- `bitsandbytes` (required for 4-bit quantization)

- `accelerate` (required for `device_map`-based model loading)



Install them using pip if they are not already available:

```bash

pip install torch datasets transformers peft trl bitsandbytes accelerate
|
``` |
|
|
|
### 2. Model and Dataset |
|
|
|
- **Model**: The base model used is `LLaMA-2-7b`. The script loads this model from a specified local directory. |
|
- **Dataset**: The training data is loaded from a specified directory (or directly from the Hugging Face Hub). The dataset must provide a `"text"` field containing the training examples (see the sketch below).
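
A quick way to verify that a dataset has the expected layout (a minimal sketch; the Hub id below is the dataset used in the example configuration further down):

```python
from datasets import load_dataset

# Works with either a local copy or the Hub id (the Hub id is used here for illustration).
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

print(dataset.column_names)      # expected: ['text']
print(dataset[0]["text"][:200])  # first 200 characters of one training example
```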
|
|
|
### 3. QLoRA Configuration |
|
|
|
The LoRA parameters configure the low-rank adaptation that is trained on top of the quantized base model (see the sketch after this list):
|
- **LoRA Attention Dimension (`lora_r`)**: 64 |
|
- **LoRA Alpha Parameter (`lora_alpha`)**: 16 |
|
- **LoRA Dropout Probability (`lora_dropout`)**: 0.1 |
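
These three values map directly onto `peft.LoraConfig`, exactly as done in the full script below; a minimal sketch:

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=64,              # lora_r: rank of the low-rank update matrices
    lora_alpha=16,     # lora_alpha: scaling factor applied to the LoRA update
    lora_dropout=0.1,  # lora_dropout: dropout applied to the LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
)
```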
|
|
|
### 4. BitsAndBytes Configuration |
|
|
|
Quantization settings used when loading the base model (see the sketch after this list):
|
- **Use 4-bit Precision**: True |
|
- **Compute Data Type**: `float16` |
|
- **Quantization Type**: `nf4` |
|
- **Nested Quantization**: False |
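
These settings map onto `transformers.BitsAndBytesConfig`, mirroring the full script below:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load the base model in 4-bit precision
    bnb_4bit_quant_type="nf4",             # NF4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype for the 4-bit weights
    bnb_4bit_use_double_quant=False,       # no nested (double) quantization
)
```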
|
|
|
### 5. Training Configuration |
|
|
|
Training parameters are defined as follows: |
|
- **Output Directory**: `./results` |
|
- **Number of Epochs**: 300 |
|
- **Batch Size**: 4 |
|
- **Gradient Accumulation Steps**: 1 |
|
- **Learning Rate**: 2e-4 |
|
- **Weight Decay**: 0.001 |
|
- **Optimizer**: `paged_adamw_32bit` |
|
- **Learning Rate Scheduler**: `cosine` |
|
- **Gradient Clipping**: 0.3 |
|
- **Warmup Ratio**: 0.03 |
|
- **Logging Steps**: 25 |
|
- **Save Steps**: 0 |
|
|
|
### 6. Training and Evaluation |
|
|
|
The script includes preprocessing of the dataset, model initialization with QLoRA, and training using `SFTTrainer` from the `trl` library. It supports mixed precision training and gradient checkpointing to enhance training efficiency. |
|
|
|
### 7. Usage Instructions |
|
|
|
1. **Update File Paths**: Adjust `model_name`, `dataset_name`, and `new_model` paths according to your environment. |
|
2. **Run the Script**: Execute the script in your Python environment to start the fine-tuning process. |
|
|
|
```bash |
|
python fine_tune_llama.py |
|
``` |
|
|
|
3. **Monitor Training**: Use TensorBoard or similar tools to monitor the training progress. |
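
Because the script passes `report_to="tensorboard"` to `TrainingArguments`, logs are written under the output directory; assuming the default `output_dir = "./results"`:

```bash

tensorboard --logdir ./results

```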
|
|
|
### 8. Model Saving |
|
|
|
After training, the LoRA adapter weights are saved to the specified directory (`new_model`). The adapter can be reloaded on top of the base model for further evaluation or deployment.
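
A minimal sketch of reloading the saved adapter on top of the base model (the paths follow the example configuration below; adjust them to your setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "NousResearch/Llama-2-7b-chat-hf"                          # or your local copy
adapter_path = "/data/bio-eng-llm/llm_repo/mlabonne/llama-2-7b-miniguanaco"  # new_model

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path, torch_dtype=torch.float16, device_map={"": 0}
)
model = PeftModel.from_pretrained(base_model, adapter_path)
model = model.merge_and_unload()  # optionally fold the adapter into the base weights

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
```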
|
|
|
## Example Configuration |
|
|
|
Here’s an example configuration used for fine-tuning: |
|
|
|
_Note_: the base model is `NousResearch/Llama-2-7b-chat-hf` and the dataset is `mlabonne/guanaco-llama2-1k`.

_Note_: in this example both were saved to a local directory and loaded from there, but you can download them directly from the Hugging Face Hub instead.
|
|
|
```python |
|
model_name = "/data/bio-eng-llm/llm_repo/NousResearch/Llama-2-7b-chat-hf" # the base model is: NousResearch/Llama-2-7b-chat-hf |
|
dataset_name = "/data/bio-eng-llm/llm_repo/mlabonne/guanaco-llama2-1k" # the dataset is: mlabonne/guanaco-llama2-1k |
|
new_model = "/data/bio-eng-llm/llm_repo/mlabonne/llama-2-7b-miniguanaco" |
|
|
|
lora_r = 64 |
|
lora_alpha = 16 |
|
lora_dropout = 0.1 |
|
|
|
use_4bit = True |
|
bnb_4bit_compute_dtype = "float16" |
|
bnb_4bit_quant_type = "nf4" |
|
use_nested_quant = False |
|
|
|
output_dir = "./results" |
|
num_train_epochs = 300 |
|
fp16 = False |
|
bf16 = False |
|
per_device_train_batch_size = 4 |
|
gradient_accumulation_steps = 1 |
|
gradient_checkpointing = True |
|
max_grad_norm = 0.3 |
|
learning_rate = 2e-4 |
|
weight_decay = 0.001 |
|
optim = "paged_adamw_32bit" |
|
lr_scheduler_type = "cosine" |
|
max_steps = -1 |
|
warmup_ratio = 0.03 |
|
group_by_length = True |
|
save_steps = 0 |
|
logging_steps = 25 |
|
``` |
|
|
|
|
|
|
|
## Full Python Training Script
|
|
|
```python |
|
|
|
|
|
import os |
|
import torch |
|
from datasets import load_dataset |
|
from transformers import ( |
|
AutoModelForCausalLM, |
|
AutoTokenizer, |
|
BitsAndBytesConfig, |
|
HfArgumentParser, |
|
TrainingArguments, |
|
pipeline, |
|
logging, |
|
) |
|
from peft import LoraConfig, PeftModel |
|
from trl import SFTTrainer |
|
|
|
|
|
|
|
import sys
|
|
|
cwd = os.getcwd() |
|
# sys.path.append(cwd + '/my_directory') |
|
sys.path.append(cwd) |
|
|
|
|
|
def setting_directory(depth): |
|
current_dir = os.path.abspath(os.getcwd()) |
|
root_dir = current_dir |
|
for i in range(depth): |
|
root_dir = os.path.abspath(os.path.join(root_dir, os.pardir)) |
|
sys.path.append(os.path.dirname(root_dir)) |
|
return root_dir |
|
|
|
################################# |
|
|
|
|
# The model that you want to train from the Hugging Face hub |
|
|
|
|
|
|
|
model_name = "/data/bio-eng-llm/llm_repo/NousResearch/Llama-2-7b-chat-hf" |
|
|
|
|
|
|
|
|
|
|
|
|
# The instruction dataset to use |
|
dataset_name = "/data/bio-eng-llm/llm_repo/mlabonne/guanaco-llama2-1k" |
|
|
|
# Fine-tuned model name |
|
new_model = "/data/bio-eng-llm/llm_repo/mlabonne/llama-2-7b-miniguanaco" |
|
|
|
################################################################################ |
|
# QLoRA parameters |
|
################################################################################ |
|
|
|
# LoRA attention dimension |
|
lora_r = 64 |
|
|
|
# Alpha parameter for LoRA scaling |
|
lora_alpha = 16 |
|
|
|
# Dropout probability for LoRA layers |
|
lora_dropout = 0.1 |
|
|
|
################################################################################ |
|
# bitsandbytes parameters |
|
################################################################################ |
|
|
|
# Activate 4-bit precision base model loading |
|
use_4bit = True |
|
|
|
# Compute dtype for 4-bit base models |
|
bnb_4bit_compute_dtype = "float16" |
|
|
|
# Quantization type (fp4 or nf4) |
|
bnb_4bit_quant_type = "nf4" |
|
|
|
# Activate nested quantization for 4-bit base models (double quantization) |
|
use_nested_quant = False |
|
|
|
################################################################################ |
|
# TrainingArguments parameters |
|
################################################################################ |
|
|
|
# Output directory where the model predictions and checkpoints will be stored |
|
output_dir = "./results" |
|
|
|
# Number of training epochs |
|
num_train_epochs = 300 |
|
|
|
# Enable fp16/bf16 training (set bf16 to True with an A100) |
|
fp16 = False |
|
bf16 = False |
|
|
|
# Batch size per GPU for training |
|
per_device_train_batch_size = 4 |
|
|
|
# Batch size per GPU for evaluation |
|
per_device_eval_batch_size = 4 |
|
|
|
# Number of update steps to accumulate the gradients for |
|
gradient_accumulation_steps = 1 |
|
|
|
# Enable gradient checkpointing |
|
gradient_checkpointing = True |
|
|
|
# Maximum gradient norm (gradient clipping)
|
max_grad_norm = 0.3 |
|
|
|
# Initial learning rate (AdamW optimizer) |
|
learning_rate = 2e-4 |
|
|
|
# Weight decay to apply to all layers except bias/LayerNorm weights |
|
weight_decay = 0.001 |
|
|
|
# Optimizer to use |
|
optim = "paged_adamw_32bit" |
|
|
|
# Learning rate schedule |
|
lr_scheduler_type = "cosine" |
|
|
|
# Number of training steps (overrides num_train_epochs) |
|
max_steps = -1 |
|
|
|
# Ratio of steps for a linear warmup (from 0 to learning rate) |
|
warmup_ratio = 0.03 |
|
|
|
# Group sequences into batches with same length |
|
# Saves memory and speeds up training considerably |
|
group_by_length = True |
|
|
|
# Save a checkpoint every X update steps
|
save_steps = 0 |
|
|
|
# Log every X update steps
|
logging_steps = 25 |
|
|
|
################################################################################ |
|
# SFT parameters |
|
################################################################################ |
|
|
|
# Maximum sequence length to use |
|
max_seq_length = None |
|
|
|
# Pack multiple short examples in the same input sequence to increase efficiency |
|
packing = False |
|
|
|
# Load the entire model on GPU 0
|
device_map = {"": 0} |
|
|
|
|
|
|
|
################################################################################ |
|
|
|
|
|
# Load dataset (you can process it here) |
|
dataset = load_dataset(dataset_name, split="train") |
|
|
|
print(dataset[0].keys()) # This will print all the field names in your dataset |
|
|
|
# Load tokenizer and model with QLoRA configuration |
|
compute_dtype = getattr(torch, bnb_4bit_compute_dtype) |
|
|
|
bnb_config = BitsAndBytesConfig( |
|
load_in_4bit=use_4bit, |
|
bnb_4bit_quant_type=bnb_4bit_quant_type, |
|
bnb_4bit_compute_dtype=compute_dtype, |
|
bnb_4bit_use_double_quant=use_nested_quant, |
|
) |
|
|
|
# Check GPU compatibility with bfloat16 |
|
if compute_dtype == torch.float16 and use_4bit: |
|
major, _ = torch.cuda.get_device_capability() |
|
if major >= 8: |
|
print("=" * 80) |
|
print("Your GPU supports bfloat16: accelerate training with bf16=True") |
|
print("=" * 80) |
|
|
|
# Load base model |
|
model = AutoModelForCausalLM.from_pretrained( |
|
model_name, |
|
quantization_config=bnb_config, |
|
device_map=device_map |
|
) |
|
model.config.use_cache = False |
|
model.config.pretraining_tp = 1 |
|
|
|
# Load LLaMA tokenizer |
|
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) |
|
tokenizer.pad_token = tokenizer.eos_token |
|
tokenizer.padding_side = "right" # Fix weird overflow issue with fp16 training |
|
|
|
# Load LoRA configuration |
|
peft_config = LoraConfig( |
|
lora_alpha=lora_alpha, |
|
lora_dropout=lora_dropout, |
|
r=lora_r, |
|
bias="none", |
|
task_type="CAUSAL_LM", |
|
) |
|
|
|
# Set training parameters |
|
training_arguments = TrainingArguments( |
|
output_dir=output_dir, |
|
num_train_epochs=num_train_epochs, |
|
per_device_train_batch_size=per_device_train_batch_size, |
|
gradient_accumulation_steps=gradient_accumulation_steps, |
|
optim=optim, |
|
save_steps=save_steps, |
|
logging_steps=logging_steps, |
|
learning_rate=learning_rate, |
|
weight_decay=weight_decay, |
|
fp16=fp16, |
|
bf16=bf16, |
|
max_grad_norm=max_grad_norm, |
|
max_steps=max_steps, |
|
warmup_ratio=warmup_ratio, |
|
group_by_length=group_by_length, |
|
lr_scheduler_type=lr_scheduler_type, |
|
report_to="tensorboard" |
|
) |
|
|
|
# Pre-tokenize the "text" field (truncated to 512 tokens) before handing it to SFTTrainer
|
|
|
def preprocess_function(examples): |
|
return tokenizer(examples["text"], truncation=True, max_length=512) |
|
|
|
tokenized_dataset = dataset.map(preprocess_function, batched=True) |
|
|
|
trainer = SFTTrainer( |
|
model=model, |
|
train_dataset=tokenized_dataset, |
|
peft_config=peft_config, |
|
tokenizer=tokenizer, |
|
args=training_arguments, |
|
packing=packing, |
|
) |
|
|
|
# Train model |
|
trainer.train() |
|
|
|
# Save trained model |
|
trainer.model.save_pretrained(new_model) |
|
``` |
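
The `pipeline` import above is not used by the training script itself; a minimal sketch of a quick generation check with the freshly trained model still in memory (the `[INST]` prompt format is an assumption based on the Llama-2 chat template used by guanaco-llama2-1k):

```python
# Quick generation check (sketch; not part of the original training script)
prompt = "What is a large language model?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```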
|
|
|
|
|
|
|
|
|
|
|
## License |
|
|
|
This repository is licensed under the [MIT License](LICENSE). |
|
|
|
## Contact |
|
|
|
For questions or issues, please contact [author](mailto:[email protected]). |
|
|
|
--- |
|
|
|
This README provides a comprehensive guide to understanding and utilizing the script for fine-tuning the LLaMA-2-7b model using advanced techniques. Adjust file paths and parameters as needed based on your specific requirements. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|