---
license: apache-2.0
datasets:
- Abirate/english_quotes
language:
- en
library_name: transformers
---

# 4-bit quantization - 4.92 GB GPU memory usage for inference

**See the same fine-tuning for Llama2-7B-Chat:** [https://huggingface.co./nlpulse/llama2-7b-chat-english_quotes](https://huggingface.co./nlpulse/llama2-7b-chat-english_quotes)

```
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01   Driver Version: 515.105.01   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   1  NVIDIA GeForce ...  Off  | 00000000:04:00.0 Off |                  N/A |
| 37%   70C    P2   163W / 170W |   4923MiB / 12288MiB |     91%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```

## Fine-tuning
- 3 epochs, all dataset samples (split=train), 939 steps
- 1 x GPU NVIDIA GeForce RTX 3060 12GB - max. GPU memory: 7.44 GB
- Duration: 1h45min

```
$ nvidia-smi && free -h
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01   Driver Version: 515.105.01   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   1  NVIDIA GeForce ...  Off  | 00000000:04:00.0 Off |                  N/A |
|100%   89C    P2   166W / 170W |   7439MiB / 12288MiB |     93%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
               total        used        free      shared  buff/cache   available
Mem:            77Gi        14Gi        23Gi        79Mi        39Gi        62Gi
Swap:           37Gi          0B        37Gi

```
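
The run above used QLoRA (a 4-bit base model with LoRA adapters); the exact training code is linked in the Scripts section below. For orientation, here is a minimal sketch of such a run with peft and transformers. The base model id, LoRA hyperparameters, target modules, and batch size are illustrative assumptions, not necessarily the values behind this checkpoint.

```python
# Minimal QLoRA fine-tuning sketch (hyperparameters are assumptions,
# not the exact values used for this checkpoint).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# load the base model in 4-bit, same config as the inference section below
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=quant_config, device_map={"": 0})
model = prepare_model_for_kbit_training(model)

# attach LoRA adapters to the attention projections (assumed modules)
lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# tokenize each quote of the english_quotes dataset
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda samples: tokenizer(samples["quote"]), batched=True)

trainer = Trainer(
    model=model,
    train_dataset=data["train"],
    args=TrainingArguments(
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=50,
        output_dir="outputs",
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # avoid cache warnings during training
trainer.train()
```

At an effective batch size of 8, three epochs over the ~2.5k quotes lands near the 939 steps reported above; on a 12 GB card the batch size may need lowering.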

## Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_path = "nlpulse/gpt-j-6b-english_quotes"

# tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# quantization config: NF4 with double quantization, bfloat16 compute
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16  # dequantize to bfloat16 for compute
)

# model: load the fine-tuned weights in 4-bit on GPU 0
model = AutoModelForCausalLM.from_pretrained(
    model_path, quantization_config=quant_config, device_map={"": 0})

# inference
device = "cuda"
text_list = ["Ask not what your country", "Be the change that", "You only live once, but", "I'm selfish, impatient and"]
for text in text_list:
    inputs = tokenizer(text, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=60)
    print('>> ', text, " => ", tokenizer.decode(outputs[0], skip_special_tokens=True))

```
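
To cross-check the memory figure from inside Python rather than via nvidia-smi, a line like this can be appended after the generation loop (PyTorch only tracks its own allocations, so it reads a bit lower than the per-process total nvidia-smi reports):

```python
# rough cross-check of peak GPU memory (PyTorch allocations only)
print(f"peak GPU memory: {torch.cuda.max_memory_allocated(0) / 1024**3:.2f} GB")
```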

## Requirements
```
pip install -U bitsandbytes
pip install -U git+https://github.com/huggingface/transformers.git 
pip install -U git+https://github.com/huggingface/peft.git
pip install -U accelerate
pip install -U datasets
pip install -U scipy
```
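
Since transformers and peft are installed straight from git, the API can drift between pulls; a quick check of what actually got installed:

```python
# print the installed versions of the quantization/fine-tuning stack
import bitsandbytes, peft, transformers
print("transformers:", transformers.__version__)
print("peft:", peft.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
```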

## Scripts
[https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/gptj-6b](https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/gptj-6b)


## References
[QLoRa: Fine-Tune a Large Language Model on Your GPU](https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b)

[Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co./blog/4bit-transformers-bitsandbytes)