SGEcon committed
Commit e38792e
Parent: 7e3ab48

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -83,11 +83,11 @@ If you wish to use the original data, please contact the original author directly.
 
 ## Training Details
 
-We use QLoRA to train the base model.
+- We use QLoRA to train the base model.
 Quantized Low-Rank Adapters (QLoRA) is an efficient fine-tuning technique that backpropagates through a frozen, 4-bit-quantized pre-trained language model, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage.
 The method combines NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; Double Quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and Paged Optimizers, which manage memory spikes during mini-batch processing. Together, these increase memory efficiency without sacrificing performance.
 
-We also performed instruction tuning using the data we collected and the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from Hugging Face.
+- We also performed instruction tuning using the data we collected and the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from Hugging Face.
 Instruction tuning is supervised learning in which the instruction and any input data are combined to form the model input, paired with the output data as the target.
 
 
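To make the QLoRA recipe in the hunk above concrete, here is a minimal, hypothetical sketch of such a setup using the Hugging Face transformers, peft, and bitsandbytes libraries. The base-model id, LoRA rank, and target modules are illustrative assumptions, not the configuration actually used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization: NF4 data type plus Double Quantization,
# the two memory-saving components described in the diff above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",         # NormalFloat 4-bit
    bnb_4bit_use_double_quant=True,    # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" is a placeholder, not this repository's actual base model.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the frozen quantized model;
# rank, alpha, and target modules are illustrative defaults.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The third component, Paged Optimizers, is typically enabled at training time, for example with `optim="paged_adamw_8bit"` in `transformers.TrainingArguments`.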
 
 
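The instruction-tuning bullet pairs an instruction and its input with an output. The commit does not show the prompt template, so the Alpaca-style formatter below is only a hypothetical illustration of how such pairs can be built; the template and the instruction/input/output field names are assumptions.

```python
def format_example(example: dict) -> dict:
    """Turn one instruction record into a supervised (prompt, target) pair.

    The Alpaca-style template and the field names are assumptions for
    illustration, not this repository's actual data format.
    """
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example.get("input"):  # the extra input field is optional
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += "### Response:\n"
    return {"prompt": prompt, "target": example["output"]}

# Usage: the model is trained to produce `target` given `prompt`.
pair = format_example({
    "instruction": "Summarize the following sentence in one clause.",
    "input": "QLoRA fine-tunes a frozen 4-bit quantized language model.",
    "output": "QLoRA enables memory-efficient 4-bit fine-tuning.",
})
print(pair["prompt"] + pair["target"])
```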