---
library_name: transformers
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
language:
- ko
- en
tags:
- Economic
- Finance
base_model: yanolja/KoSOLAR-10.7B-v0.2
---
## Model Details
Model Developers: Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)
## Model Description
This is a language model specialized in economics and finance, trained on a variety of economics- and finance-related data.
The data sources are listed below. We are not releasing the data we trained on because it was collected for research and policy purposes.
If you wish to use the original data, please contact the original authors directly for permission.
- **Developed by:** Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)
- **License:** cc-by-nc-4.0
- **Base Model:** yanolja/KoSOLAR-10.7B-v0.2 (<https://huggingface.co./yanolja/KoSOLAR-10.7B-v0.2>)
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

peft_model_id = "SGEcon/EconFinKoSOLAR-10.7B_SFT"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 4-bit NF4 and attach the fine-tuned LoRA adapters
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map={"": 0})
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model.eval()
```
## Conducting Conversation
```python
import re

def gen(x):
    # Build the prompt in the "### 질문: (question)" / "### 답변: (answer)" format used for fine-tuning
    inputs = tokenizer(f"### 질문: {x}\n\n### 답변:", return_tensors='pt', return_token_type_ids=False)
    # Move the inputs to the GPU (if available)
    inputs = {k: v.to("cuda" if torch.cuda.is_available() else "cpu") for k, v in inputs.items()}
    gened = model.generate(
        **inputs,
        max_new_tokens=256,                   # Maximum number of new tokens to generate
        early_stopping=True,
        num_return_sequences=1,               # Generate only one answer
        do_sample=True,                       # Enable sampling to generate a variety of answers
        eos_token_id=tokenizer.eos_token_id,  # Stop at the EOS token
        temperature=0.9,                      # This option is adjustable.
        top_p=0.8,                            # This option is adjustable.
        top_k=100                             # This option is adjustable.
    )
    # Decode the generated sequence into output text
    decoded = tokenizer.decode(gened[0], skip_special_tokens=True).strip()
    # Keep only the text after the "### 답변:" marker
    answer_start_idx = decoded.find("### 답변:") + len("### 답변:")
    complete_answer = decoded[answer_start_idx:].strip()
    # Truncate after the last sentence-ending punctuation mark (. ? !) to drop an unfinished trailing fragment
    match = re.search(r"[\.\?\!][^\.\?\!]*$", complete_answer)
    if match:
        complete_answer = complete_answer[:match.start() + 1].strip()
    return complete_answer
```
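For example, the helper above can be called as follows (the prompt is illustrative):
```python
# Ask the model to explain the role of the central bank (illustrative prompt)
print(gen("중앙은행의 역할에 대해서 설명해줄래?"))
```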
## Training Details
- We train our model with PEFT (parameter-efficient fine-tuning).
PEFT tunes only a small subset of a model's parameters during fine-tuning rather than all of them.
Because most parameters stay fixed, the model is less likely to suffer from catastrophic forgetting, in which previously learned capabilities are lost when new ones are learned, and computation and storage costs are reduced significantly.
- We use QLoRA to fine-tune the base model.
Quantized Low-Rank Adapters (QLoRA) is an efficient fine-tuning technique that backpropagates through a 4-bit quantized pre-trained language model into low-rank adapters, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU with greatly reduced memory usage.
The method combines NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; double quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and paged optimizers, which manage memory spikes during mini-batch processing, increasing memory efficiency without sacrificing performance.
- We also performed instruction tuning using the data we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from the Hugging Face Hub (see the prompt-formatting sketch after this list).
Instruction tuning is supervised learning in which an instruction (and any accompanying input) is paired with the desired output.
In other words, the pre-trained model is fine-tuned for a specific task or set of tasks while being taught to follow explicit instructions or guidelines.
It is a form of supervised fine-tuning (SFT) that aims to improve the generality and adaptability of a model by teaching it to understand and follow instructions.
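As a minimal sketch of the prompt formatting (assuming the training pairs were rendered into the same question/answer template that the inference code above expects; the helper name and the example pair are hypothetical):
```python
def format_example(instruction: str, answer: str) -> str:
    # Hypothetical formatter: renders one instruction/answer pair into the
    # "### 질문:" / "### 답변:" (question/answer) template used at inference time.
    return f"### 질문: {instruction}\n\n### 답변: {answer}"

# Illustrative pair, not taken from the actual training data
print(format_example("What is the base interest rate?",
                     "The base rate is the policy interest rate set by the central bank."))
```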
## Training Data
1. Bank of Korea: 700 Economic and Financial Terms (<https://www.bok.or.kr/portal/bbs/B0000249/view.do?nttId=235017&menuNo=200765>)
2. Financial Supervisory Service: Financial Consumer Information Portal "FINE" Financial Glossary (<https://fine.fss.or.kr/fine/fnctip/fncDicary/list.do?menuNo=900021>)
3. KDI Economic Information Center: Current Affairs Glossary (<https://eiec.kdi.re.kr/material/wordDic.do>)
4. The Korea Economic Daily / Hankyung.com: Hankyung Economic Dictionary (<https://terms.naver.com/list.naver?cid=42107&categoryId=42107>), Today's TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=1>), Today's Junior TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=5>), Saenggeul Saenggeul Hankyung (<https://sgsg.hankyung.com/tesat/study>)
5. Ministry of SMEs and Startups / Government of the Republic of Korea: Ministry of SMEs and Startups Terminology (<https://terms.naver.com/list.naver?cid=42103&categoryId=42103>)
6. Go Seong-sam / Beobmun Publishing: Dictionary of Accounting and Tax Terms (<https://terms.naver.com/list.naver?cid=51737&categoryId=51737>)
7. Mankiw's Principles of Economics, 8th edition: Word Index
8. kyujinpy/KOR-OpenOrca-Platypus-v3 (<https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3>)
At the request of the original authors, the data may not be used for commercial purposes; the model is therefore released under the CC-BY-NC-4.0 license.
The copyright of the training data belongs to its original authors, so please contact them before reusing it.
## Training Hyperparameters
|Hyperparameter|SGEcon/KoSOLAR-10.7B-v0.2_fin_v4|
|------|---|
|LoRA method|LoRA|
|load in 4-bit|True|
|learning rate|1e-5|
|lr scheduler|linear|
|LoRA alpha|16|
|LoRA rank|16|
|LoRA dropout|0.05|
|optim|paged_adamw_32bit|
|target modules|q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head|
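As a minimal sketch, the hyperparameters above correspond to a configuration like the following (assuming the `peft`/`transformers` stack used elsewhere in this card; the quantization settings mirror the loading code above, and anything not listed in the table, such as the output directory, batch size, or number of epochs, is omitted or a placeholder):
```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit NF4 quantization ("load in 4-bit = True"), as in the loading code above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA settings from the table
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "lm_head"],
    task_type="CAUSAL_LM",
)

# Optimizer and schedule from the table; remaining arguments are placeholders
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    optim="paged_adamw_32bit",
)
```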
## License
This model is distributed under the Creative Commons Attribution-NonCommercial 4.0 International Public License.
## Example
We removed only duplicate sentences from the model output below.
> 중앙은행의 역할에 대해서 설명해줄래? (Can you explain the role of the central bank?)
>> The central bank is a national institution established for the circulation of the currency issued by the state and for the stable operation of the economy. ① For economic stability, the central bank supervises financial institutions, issues currency, and conducts monetary policy. ② Conducting monetary policy includes supplying liquidity, adjusting interest rates to regulate supply in the market, maintaining reserve requirements, and adjusting the exchange rate. ③ The central bank also handles lending to financial institutions, foreign exchange transactions, and the management of banknote issuance and exchange. ④ Monetary policy aims to promote price stability and employment and to foster balanced economic growth by controlling the issuance and supply of money and credit.
μ μ€μλ ννμ μ μ©μ λ°ν λ° κ³΅κΈμ ν΅μ λ₯Ό ν΅ν΄ λ¬Όκ°μμ κ³Ό κ³ μ©μ μ¦λνκ³ κ²½μ μ κ· νμ±μ₯μ λλͺ¨νλ κ²μ λͺ©νλ‘ νκ³ μμ΅λλ€. |