---
library_name: transformers
tags: []
---
# Fine-Tuning LLaMA-2-7b with QLoRA on Custom Dataset
This repository provides a setup and script for fine-tuning the LLaMA-2-7b model with QLoRA (Quantized Low-Rank Adaptation) on custom datasets. The script is designed for efficient, flexible training of large language models (LLMs), leveraging 4-bit quantization and low-rank adapters.
## Overview
The script fine-tunes a pre-trained LLaMA-2-7b model on a custom dataset, using QLoRA to keep memory requirements low: the base model is quantized to 4-bit precision and only small low-rank adapter matrices are trained. It relies on the `transformers`, `datasets`, `peft`, and `trl` libraries for model management, data processing, and training, and supports mixed-precision training and gradient checkpointing to further improve efficiency.
## Components
### 1. Dependencies
Ensure the following libraries are installed:
- `torch`
- `datasets`
- `transformers`
- `peft`
- `trl`
- `bitsandbytes` (required for 4-bit quantization)
- `accelerate` (required for `device_map`-based model loading)
Install them using pip if they are not already available:
```bash
pip install torch datasets transformers peft trl bitsandbytes accelerate
```
### 2. Model and Dataset
- **Model**: The base model is `LLaMA-2-7b`; the script loads it from a specified local directory (the example configuration below uses the chat variant `NousResearch/Llama-2-7b-chat-hf`).
- **Dataset**: The training data is loaded from a specified directory. Each record must contain a `"text"` field holding one complete training example.
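For reference, a minimal loading sketch (the paths are placeholders; depending on how the dataset is stored locally, you may need `datasets.load_from_disk` instead of `load_dataset`):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

model_name = "/path/to/Llama-2-7b-chat-hf"   # local model directory (placeholder)
dataset_name = "/path/to/guanaco-llama2-1k"  # dataset with a "text" column (placeholder)

# Each row's "text" field holds one complete training example.
dataset = load_dataset(dataset_name, split="train")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA defines no pad token by default
tokenizer.padding_side = "right"           # right padding avoids fp16 overflow issues
```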
### 3. QLoRA Configuration
LoRA parameters configure the low-rank adaptation applied on top of the quantized base model:
- **LoRA Attention Dimension (`lora_r`)**: 64
- **LoRA Alpha Parameter (`lora_alpha`)**: 16
- **LoRA Dropout Probability (`lora_dropout`)**: 0.1
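These values map directly onto `peft.LoraConfig`; a minimal sketch:
```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=64,              # LoRA attention dimension (rank)
    lora_alpha=16,     # scaling factor for the adapter updates
    lora_dropout=0.1,  # dropout applied inside the LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
)
```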
### 4. BitsAndBytes Configuration
Quantization settings for the model:
- **Use 4-bit Precision**: True
- **Compute Data Type**: `float16`
- **Quantization Type**: `nf4`
- **Nested Quantization**: False
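These settings correspond to a `transformers.BitsAndBytesConfig` passed to the model at load time; a sketch, assuming `model_name` from the loading snippet above:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load the base weights in 4-bit precision
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in float16
    bnb_4bit_use_double_quant=False,       # nested (double) quantization disabled
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map={"": 0},  # place the whole model on GPU 0
)
model.config.use_cache = False  # the KV cache conflicts with gradient checkpointing
```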
### 5. Training Configuration
Training parameters are defined as follows:
- **Output Directory**: `./results`
- **Number of Epochs**: 300
- **Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Learning Rate**: 2e-4
- **Weight Decay**: 0.001
- **Optimizer**: `paged_adamw_32bit`
- **Learning Rate Scheduler**: `cosine`
- **Gradient Clipping**: 0.3
- **Warmup Ratio**: 0.03
- **Logging Steps**: 25
- **Save Steps**: 0
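These map onto `transformers.TrainingArguments`; a sketch with the values above:
```python
from transformers import TrainingArguments

training_arguments = TrainingArguments(
    output_dir="./results",
    num_train_epochs=300,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,   # trade extra compute for lower memory use
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_grad_norm=0.3,             # gradient clipping
    warmup_ratio=0.03,
    group_by_length=True,          # batch sequences of similar length together
    logging_steps=25,
    save_steps=0,
    report_to="tensorboard",
)
```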
### 6. Training and Evaluation
The script preprocesses the dataset, initializes the quantized model with LoRA adapters, and trains it using `SFTTrainer` from the `trl` library. Mixed-precision training and gradient checkpointing are supported to reduce memory usage during fine-tuning.
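Putting the pieces together, a sketch of the trainer setup (note that `SFTTrainer`'s keyword arguments have changed across `trl` releases; this follows the older 0.x interface, where `dataset_text_field` and `tokenizer` are passed directly):
```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                # 4-bit base model from the quantization snippet
    train_dataset=dataset,
    peft_config=peft_config,    # LoRA configuration from the QLoRA snippet
    dataset_text_field="text",  # column holding the training examples
    max_seq_length=None,        # fall back to the default maximum length
    tokenizer=tokenizer,
    args=training_arguments,
    packing=False,              # one example per sequence, no packing
)
trainer.train()
```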
### 7. Usage Instructions
1. **Update File Paths**: Adjust `model_name`, `dataset_name`, and `new_model` paths according to your environment.
2. **Run the Script**: Execute the script in your Python environment to start the fine-tuning process.
```bash
python fine_tune_llama.py
```
3. **Monitor Training**: Use TensorBoard or similar tools to monitor the training progress.
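For example, assuming `report_to="tensorboard"` is set in the training arguments, the logs land under the output directory by default:
```bash
tensorboard --logdir ./results/runs
```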
### 8. Model Saving
After training, the model is saved to the directory specified by `new_model`. The trained model can then be loaded for further evaluation or deployment.
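A sketch of saving and later reloading the fine-tuned weights (merging the adapter into the base model is optional and requires enough memory to hold the full-precision weights):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Save the trained LoRA adapter weights.
trainer.model.save_pretrained(new_model)

# Later: reload the base model and apply the adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map={"": 0}
)
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()  # optionally fold the adapter into the base weights
```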
## Example Configuration
Here’s an example configuration used for fine-tuning:
```python
model_name = "/data/bio-eng-llm/llm_repo/NousResearch/Llama-2-7b-chat-hf"
dataset_name = "/data/bio-eng-llm/llm_repo/mlabonne/guanaco-llama2-1k"
new_model = "/data/bio-eng-llm/llm_repo/mlabonne/llama-2-7b-miniguanaco"
lora_r = 64
lora_alpha = 16
lora_dropout = 0.1
use_4bit = True
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_nested_quant = False
output_dir = "./results"
num_train_epochs = 300
fp16 = False
bf16 = False
per_device_train_batch_size = 4
gradient_accumulation_steps = 1
gradient_checkpointing = True
max_grad_norm = 0.3
learning_rate = 2e-4
weight_decay = 0.001
optim = "paged_adamw_32bit"
lr_scheduler_type = "cosine"
max_steps = -1
warmup_ratio = 0.03
group_by_length = True
save_steps = 0
logging_steps = 25
```
## License
This repository is licensed under the [MIT License](LICENSE).
## Contact
For questions or issues, please contact [author](mailto:[email protected]).
---
This README provides a comprehensive guide to understanding and utilizing the script for fine-tuning the LLaMA-2-7b model using advanced techniques. Adjust file paths and parameters as needed based on your specific requirements.