Update README.md

README.md CHANGED

@@ -83,13 +83,19 @@ If you wish to use the original data, please contact the original author directly.

## Training Details

- We train our model with PEFT.
PEFT (Parameter-Efficient Fine-Tuning) is a technique that tunes only a small subset of a model's parameters during fine-tuning instead of all of them.
Because only a few parameters are updated while the rest stay fixed, the model is less likely to suffer from catastrophic forgetting, where a model forgets previously learned tasks as it learns new ones.
This significantly reduces computation and storage costs.
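
The sketch below shows what such a parameter-efficient setup can look like with the `peft` library; the checkpoint name and LoRA hyperparameters are placeholders, not the exact values used for this model.

```python
# Minimal PEFT/LoRA sketch: wrap a base model so that only small adapter
# matrices are trained while the original weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("base-model-name")  # placeholder checkpoint

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```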

- We use QLoRA to train the base model.
Quantized Low-Rank Adaptation (QLoRA) is an efficient technique that fine-tunes a 4-bit quantized pre-trained language model through low-rank adapters, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage.
The method combines NormalFloat 4-bit (NF4), a data type that is theoretically optimal for normally distributed weights; Double Quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and Paged Optimizers, which manage memory spikes during mini-batch processing, to improve memory efficiency without sacrificing performance.
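
As a rough illustration, a QLoRA-style setup with `transformers` and `bitsandbytes` typically looks like the sketch below; the checkpoint name and hyperparameters are placeholders, not the exact training configuration.

```python
# Minimal QLoRA-style sketch: load the base model in 4-bit NF4 with double
# quantization, then train LoRA adapters on top of the frozen quantized weights.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat 4-bit data type
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",                    # placeholder checkpoint
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Paged optimizers (e.g. optim="paged_adamw_8bit" in TrainingArguments) absorb
# memory spikes during mini-batch processing.
```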

- Also, we performed instruction tuning using the data that we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from the Hugging Face Hub.
Instruction tuning is supervised learning on paired examples: the instruction and its input data are given to the model together as the input, and the expected output data serves as the target.
In other words, instruction tuning fine-tunes a pre-trained model for a specific task or set of tasks, teaching the model to follow specific instructions or guidelines.
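
For clarity, the sketch below shows how one instruction-tuning example can be serialized into a single supervised training text; the prompt template and field names are assumptions for illustration, not the exact format used for this model.

```python
# Minimal instruction-tuning data sketch: an instruction and its input form the
# model input, and the reference response is the supervised target.
def build_example(instruction: str, input_text: str, output_text: str) -> str:
    prompt = (
        "### Instruction:\n" + instruction + "\n\n"
        + ("### Input:\n" + input_text + "\n\n" if input_text else "")
        + "### Response:\n"
    )
    # During fine-tuning the model learns to continue the prompt with the response.
    return prompt + output_text

example = build_example(
    instruction="Summarize the following sentence in one clause.",
    input_text="QLoRA fine-tunes 4-bit quantized language models with low-rank adapters.",
    output_text="QLoRA enables memory-efficient fine-tuning.",
)
print(example)
```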