---
library_name: transformers
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
language:
- ko
- en
tags:
- Economic
- Finance
base_model: EleutherAI/polyglot-ko-5.8b
---


# Model Details
Model Developers: Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)


### Model Description

This model is a language model specialized in economics and finance. It was trained on a variety of economics- and finance-related data.
The data sources are listed below; we are not releasing the training data because it was collected for research/policy purposes.
If you wish to use the original data, please contact the original authors directly for permission.

- **Developed by:** Sogang University SGEconFinlab(<https://sc.sogang.ac.kr/aifinlab/>)
- **License:** cc-by-nc-4.0
- **Base Model:** EleutherAI/polyglot-ko-5.8b(<https://huggingface.co./EleutherAI/polyglot-ko-5.8b>)

## Loading the Model

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftConfig, PeftModel

    peft_model_id = "SGEcon/polyglot-ko-5.8b_fin_v4"
    config = PeftConfig.from_pretrained(peft_model_id)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16
    )
    model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map={"":0})
    model = PeftModel.from_pretrained(model, peft_model_id)
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    model.eval()
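
The snippet above loads the base model in 4-bit NF4 with double quantization and bfloat16 compute, matching the QLoRA setup described under Training Details, and then attaches the LoRA adapter; this keeps memory usage low enough to run the 5.8B model on a single GPU.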

## Conducting Conversation

    import re

    def gen(x):
        inputs = tokenizer(f"### ์งˆ๋ฌธ: {x}\n\n### ๋‹ต๋ณ€:", return_tensors='pt', return_token_type_ids=False)

        # Move the inputs to the GPU (if available)
        inputs = {k: v.to(device="cuda" if torch.cuda.is_available() else "cpu") for k, v in inputs.items()}

        gened = model.generate(
            **inputs,
            max_new_tokens=256,  # maximum number of new tokens to generate
            early_stopping=True,
            num_return_sequences=1,  # generate a single answer
            do_sample=True,  # enable sampling for more varied answers
            eos_token_id=tokenizer.eos_token_id,  # stop at the EOS token
            temperature=0.9,  # temperature controls generation diversity
            top_p=0.8,  # p value for nucleus sampling
            top_k=50  # k value for top-k sampling
        )

        # Decode the generated sequence into output text
        decoded = tokenizer.decode(gened[0], skip_special_tokens=True).strip()

        # Keep only the text after the "### ๋‹ต๋ณ€:" marker
        answer_start_idx = decoded.find("### ๋‹ต๋ณ€:") + len("### ๋‹ต๋ณ€:")
        complete_answer = decoded[answer_start_idx:].strip()

        # Trim any incomplete trailing sentence after the last sentence-ending punctuation (. ? !)
        match = re.search(r"[\.\?\!][^\.\?\!]*$", complete_answer)
        if match:
            complete_answer = complete_answer[:match.start() + 1].strip()

        return complete_answer
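
A minimal usage sketch (the question below is only an illustrative prompt, not taken from the original card):

    # Ask the model an economics question in Korean ("Can you explain what inflation is?")
    print(gen("์ธํ”Œ๋ ˆ์ด์…˜์ด๋ž€ ๋ฌด์—‡์ธ์ง€ ์„ค๋ช…ํ•ด์ค„๋ž˜?"))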



    
## Training Details


- We train our model with PEFT (Parameter-Efficient Fine-Tuning).
PEFT is a technique that tunes only a small subset of a model's parameters during fine-tuning rather than all of them.
By updating only a few parameters and keeping the rest fixed, the model is less likely to suffer from catastrophic forgetting, where previously learned tasks are lost as new ones are learned.
This also significantly reduces computation and storage costs.

- We use QLoRA to train the base model (a configuration sketch appears after the Training Hyperparameters table below).
Quantized Low-Rank Adapters (QLoRA) is an efficient fine-tuning technique that backpropagates through a 4-bit quantized pre-trained language model, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage.
The method uses NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; Double Quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and Paged Optimizers, which manage memory spikes during mini-batch processing. Together these increase memory efficiency without sacrificing performance.

- We also performed instruction tuning using the data we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from the Hugging Face Hub.
Instruction tuning is supervised learning in which an instruction and its input are used together as the model input and the desired output serves as the target.
In other words, instruction tuning fine-tunes a pre-trained model on a specific task or set of tasks, teaching the model to follow specific instructions or guidelines.
It is a form of Supervised Fine-Tuning (SFT) that aims to improve the generality and adaptability of a model by teaching it to understand and follow instructions; a prompt-format sketch follows this list.

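The exact training prompt template is not published on this card; below is a minimal sketch of how instruction/answer pairs could be formatted, assuming the same "### ์งˆ๋ฌธ:" / "### ๋‹ต๋ณ€:" template that the inference code above expects (the example pair is hypothetical, not from the actual training data):

    def format_example(instruction: str, output: str) -> str:
        # Assumed template, mirroring the "### ์งˆ๋ฌธ:" / "### ๋‹ต๋ณ€:" prompt used at inference time
        return f"### ์งˆ๋ฌธ: {instruction}\n\n### ๋‹ต๋ณ€: {output}"

    # Hypothetical instruction/answer pair, for illustration only
    print(format_example("๊ธฐ์ค€๊ธˆ๋ฆฌ๋ž€ ๋ฌด์—‡์ธ๊ฐ€์š”?", "๊ธฐ์ค€๊ธˆ๋ฆฌ๋Š” ์ค‘์•™์€ํ–‰์ด ์ •ํ•˜๋Š” ์ •์ฑ…๊ธˆ๋ฆฌ๋กœ, ์‹œ์ค‘๊ธˆ๋ฆฌ์˜ ๊ธฐ์ค€์ด ๋ฉ๋‹ˆ๋‹ค."))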
 
### Training Data

1. ํ•œ๊ตญ์€ํ–‰: ๊ฒฝ์ œ๊ธˆ์œต์šฉ์–ด 700์„ (<https://www.bok.or.kr/portal/bbs/B0000249/view.do?nttId=235017&menuNo=200765>)
2. ๊ธˆ์œต๊ฐ๋…์›: ๊ธˆ์œต์†Œ๋น„์ž ์ •๋ณด ํฌํ„ธ ํŒŒ์ธ ๊ธˆ์œต์šฉ์–ด์‚ฌ์ „(<https://fine.fss.or.kr/fine/fnctip/fncDicary/list.do?menuNo=900021>)
3. KDI ๊ฒฝ์ œ์ •๋ณด์„ผํ„ฐ: ์‹œ์‚ฌ ์šฉ์–ด์‚ฌ์ „(<https://eiec.kdi.re.kr/material/wordDic.do>)
4. ํ•œ๊ตญ๊ฒฝ์ œ์‹ ๋ฌธ/ํ•œ๊ฒฝ๋‹ท์ปด: ํ•œ๊ฒฝ๊ฒฝ์ œ์šฉ์–ด์‚ฌ์ „(<https://terms.naver.com/list.naver?cid=42107&categoryId=42107>), ์˜ค๋Š˜์˜ TESAT(<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=1>), ์˜ค๋Š˜์˜ ์ฃผ๋‹ˆ์–ด TESAT(<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=5>), ์ƒ๊ธ€์ƒ๊ธ€ํ•œ๊ฒฝ(<https://sgsg.hankyung.com/tesat/study>)
5. ์ค‘์†Œ๋ฒค์ฒ˜๊ธฐ์—…๋ถ€/๋Œ€ํ•œ๋ฏผ๊ตญ์ •๋ถ€: ์ค‘์†Œ๋ฒค์ฒ˜๊ธฐ์—…๋ถ€ ์ „๋ฌธ์šฉ์–ด(<https://terms.naver.com/list.naver?cid=42103&categoryId=42103>)
6. ๊ณ ์„ฑ์‚ผ/๋ฒ•๋ฌธ์ถœํŒ์‚ฌ: ํšŒ๊ณ„ยท์„ธ๋ฌด ์šฉ์–ด์‚ฌ์ „(<https://terms.naver.com/list.naver?cid=51737&categoryId=51737>)
7. ๋งจํ์˜ ๊ฒฝ์ œํ•™ 8ํŒ Word Index
8. kyujinpy/KOR-OpenOrca-Platypus-v3(<https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3>)


At the request of the original authors, the data is not to be used for commercial purposes; the model is therefore released under the CC-BY-NC-4.0 license.
The copyright of the data used belongs to the original authors, so please contact them before using it.




### Training Hyperparameters

|Hyperparameter|SGEcon/polyglot-ko-5.8b_fin_v4|
|------|---|
|LoRA Method|LoRA|
|load in 4 bit|True|
|learning rate|3e-4|
|lora alpha|8|
|lora rank|16|
|lora dropout|0.05|
|optim|paged_adamw_8bit|
|target_modules|query_key_value|
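
A minimal configuration sketch of how these hyperparameters could map onto a peft/bitsandbytes setup (the variable names and the trainer wiring are illustrative assumptions; the actual training script is not published):

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    # 4-bit NF4 quantization with double quantization, as used by QLoRA
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    base = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/polyglot-ko-5.8b", quantization_config=bnb_config, device_map={"": 0}
    )
    base = prepare_model_for_kbit_training(base)

    # LoRA adapter matching the table above (rank 16, alpha 8, dropout 0.05)
    lora_config = LoraConfig(
        r=16,
        lora_alpha=8,
        lora_dropout=0.05,
        target_modules=["query_key_value"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()

    # The optimizer (paged_adamw_8bit) and learning rate (3e-4) from the table would be
    # passed to the trainer, e.g. transformers.TrainingArguments(optim="paged_adamw_8bit",
    # learning_rate=3e-4, ...).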

   

### Example

> ์ค‘์•™์€ํ–‰์˜ ์—ญํ• ์— ๋Œ€ํ•ด์„œ ์„ค๋ช…ํ•ด์ค„๋ž˜?

>> ์ค‘์•™์€ํ–‰์€ ๊ตญ๊ฐ€์˜ ์‹ ์šฉ์„ ๊ด€๋ฆฌํ•˜๋Š” ์ค‘์š”ํ•œ ์—ญํ• ์„ ๋‹ด๋‹นํ•˜๊ณ  ์žˆ๋‹ค. ํŠนํžˆ ๊ธˆ์œต์‹œ์žฅ์˜ ์•ˆ์ •์„ฑ์„ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ค‘์š”ํ•˜๋‹ค. ์ค‘์•™์€ํ–‰์€ ๊ธˆ์œต์‹œ์žฅ์ด ๋ถˆ์•ˆ์ •ํ•ด์งˆ ๊ฒฝ์šฐ ์œ ๋™์„ฑ์„ ๊ณต๊ธ‰ํ•˜๋Š” ๋“ฑ ๊ฒฝ์ œ๊ฐ€ ์•ˆ์ •์ ์œผ๋กœ ์šด์˜๋˜๋„๋ก ์ง€์›ํ•˜๋Š” ์—ญํ• ์„ ํ•œ๋‹ค. ์ด ๊ณผ์ •์—์„œ ํ†ตํ™”๋Ÿ‰์ด ์กฐ์ ˆ๋˜๊ธฐ๋„ ํ•œ๋‹ค. ํ•œ๊ตญ์€ํ–‰์˜ ๊ฒฝ์šฐ ์™ธํ™˜๋ณด์œ ์•ก์ด ์ผ์ • ์ˆ˜์ค€ ์ดํ•˜๋กœ ๋–จ์–ด์ง€๋ฉด ์™ธํ™˜์‹œ์žฅ์— ๊ฐœ์ž…ํ•ด ์™ธํ™˜๋ณด์œ ์•ก์„ ๋Š˜๋ฆฌ๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ์™ธํ™˜์‹œ์žฅ์˜ ์•ˆ์ •์„ฑ์„ ์œ ์ง€ํ•œ๋‹ค. ๋˜ ํ†ตํ™”๋Ÿ‰์ด ๋„ˆ๋ฌด ๋งŽ์„ ๊ฒฝ์šฐ์—๋Š” ๋ˆ์˜ ๊ฐ€์น˜๋ฅผ ๋‚ฎ์ถฐ์„œ ํ†ตํ™”๋Ÿ‰์„ ์กฐ์ ˆํ•œ๋‹ค. ์ด๋Ÿฐ ๋ฐฉ๋ฒ•์œผ๋กœ ๊ฒฝ๊ธฐ๋ฅผ ์กฐ์ ˆํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ํ•œ๊ตญ์€ํ–‰์˜ ์ •์ฑ…๊ธˆ๋ฆฌ๊ฐ€ ์˜ค๋ฅด๋ฉด ์‹œ์ค‘๊ธˆ๋ฆฌ๊ฐ€ ์˜ค๋ฅด๊ณ  ์ด๋Š” ๊ธฐ์—…์˜ ๊ธˆ์œต๋น„์šฉ์„ ์ฆ๊ฐ€์‹œ์ผœ ๊ฒฝ๊ธฐ๋ฅผ ์œ„์ถ•์‹œํ‚ค๊ธฐ ๋•Œ๋ฌธ์— ํ•œ๊ตญ์€ํ–‰์€ ์ •์ฑ…๊ธˆ๋ฆฌ๋ฅผ ์ธํ•˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ๊ฒฝ๊ธฐ๋ฅผ ๋ถ€์–‘ํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ๋˜ ์ค‘์•™์€ํ–‰์€ ๊ฒฝ์ œ๊ฐ€ ์–ด๋ ค์›Œ์ง€๋ฉด ๋ˆ์„ ํ’€์–ด์„œ ๊ฒฝ์ œ๋ฅผ ์‚ด๋ฆฌ๋Š” ์—ญํ• ์„ ํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ์ฆ‰, ์ค‘์•™์€ํ–‰์€ ๊ฒฝ๊ธฐ๊ฐ€ ์นจ์ฒด๋ผ ๋ˆ์ด ์ž˜ ๋Œ์ง€ ์•Š๋Š” ์ƒํ™ฉ์ด ๋˜๋ฉด ์‹œ์ค‘์— ๋ˆ์„ ํ’€์–ด์„œ ๊ฒฝ๊ธฐ๋ฅผ ๋ถ€์–‘ํ•˜๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ๊ฒฝ์ œ๋ฅผ ์‚ด๋ฆฌ๋Š” ์—ญํ• ์„ ํ•œ๋‹ค.