---
library_name: transformers
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
language:
- ko
- en
tags:
- Economic
- Finance
base_model: yanolja/KoSOLAR-10.7B-v0.2
---


## Model Details
Model Developers: Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)


## Model Description

This model is a language model specialized in economics and finance, trained on a variety of economics- and finance-related data.
The data sources are listed below. We are not releasing the training data itself because it was collected for research and policy purposes.
If you wish to use the original data, please contact the original authors directly for permission.

- **Developed by:** Sogang University SGEconFinlab(<https://sc.sogang.ac.kr/aifinlab/>)
- **License:** cc-by-nc-4.0
- **Base Model:** yanolja/KoSOLAR-10.7B-v0.2(<https://huggingface.co./yanolja/KoSOLAR-10.7B-v0.2>)


## Loading the Model

    peft_model_id = "SGEcon/EconFinKoSOLAR-10.7B_SFT"
    config = PeftConfig.from_pretrained(peft_model_id)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16
    )
    model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map={"":0})
    model = PeftModel.from_pretrained(model, peft_model_id)
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    model.eval()

## Conducting Conversation

    import re
    import torch

    def gen(x):
        # Build the prompt in the instruction format used for fine-tuning ("### 질문:" / "### 답변:")
        inputs = tokenizer(f"### 질문: {x}\n\n### 답변:", return_tensors='pt', return_token_type_ids=False)

        # Move the input tensors to the GPU (if available)
        inputs = {k: v.to(device="cuda" if torch.cuda.is_available() else "cpu") for k, v in inputs.items()}

        gened = model.generate(
            **inputs,
            max_new_tokens=256,  # Maximum number of new tokens to generate
            num_return_sequences=1,  # Generate only one answer
            do_sample=True,  # Enable sampling to generate a variety of answers
            eos_token_id=tokenizer.eos_token_id,  # Stop at the EOS token
            temperature=0.9,  # This option is adjustable.
            top_p=0.8,  # This option is adjustable.
            top_k=100  # This option is adjustable.
        )

        # Decode the generated sequence into output text
        decoded = tokenizer.decode(gened[0], skip_special_tokens=True).strip()

        # Keep only the text after the "### 답변:" marker
        answer_start_idx = decoded.find("### 답변:") + len("### 답변:")
        complete_answer = decoded[answer_start_idx:].strip()

        # Trim any incomplete fragment after the last sentence-ending punctuation (. ? !)
        match = re.search(r"[\.\?\!][^\.\?\!]*$", complete_answer)
        if match:
            complete_answer = complete_answer[:match.start() + 1].strip()

        return complete_answer
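
As a quick check, the helper can be called directly once the model and tokenizer from the loading section are in scope. The question below is the same one used in the Example section:

    print(gen("중앙은행의 역할에 대해서 설명해줄래?"))  # "Can you explain the role of a central bank?"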



    
## Training Details

- We train our model with PEFT (Parameter-Efficient Fine-Tuning).
PEFT is a technique that tunes only a small subset of a model's parameters during fine-tuning rather than all of them.
Because most parameters stay fixed, the model is less likely to suffer from catastrophic forgetting, where previously learned tasks are forgotten as new ones are learned.
This also significantly reduces computation and storage costs.
  
- We use QLoRA to train the base model (a minimal configuration sketch is shown after this list).
Quantized Low-Rank Adapters (QLoRA) is an efficient fine-tuning technique that trains adapters on top of a 4-bit quantized pre-trained language model, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage.
The method uses NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; Double Quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and Paged Optimizers, which manage memory spikes during mini-batch processing, all without sacrificing performance.

- We also performed instruction tuning using the data we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from Hugging Face.
Instruction tuning is supervised learning on pairs in which an instruction (together with any input data) forms the input and the desired response forms the output.
In other words, instruction tuning fine-tunes a pre-trained model on a specific task or set of tasks while teaching it to follow explicit instructions or guidelines.
It is a form of Supervised Fine-Tuning (SFT) that aims to improve a model's generality and adaptability by teaching it to understand and follow instructions.
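
To make this concrete, below is a minimal sketch, not the authors' exact training script, of how a QLoRA setup with the LoRA hyperparameters listed in the table further down might be assembled with the transformers, bitsandbytes, and peft libraries:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    # NF4 4-bit quantization with double quantization (the QLoRA recipe described above)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16
    )

    base_model = AutoModelForCausalLM.from_pretrained(
        "yanolja/KoSOLAR-10.7B-v0.2", quantization_config=bnb_config, device_map={"": 0}
    )
    base_model = prepare_model_for_kbit_training(base_model)

    # LoRA adapters on the attention/MLP projections; values taken from the hyperparameter table below
    lora_config = LoraConfig(
        r=16,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "lm_head"],
        task_type="CAUSAL_LM"
    )
    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # only the small set of adapter weights is trainable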


 
## Training Data

1. Bank of Korea: 700 Selected Economic and Financial Terms (<https://www.bok.or.kr/portal/bbs/B0000249/view.do?nttId=235017&menuNo=200765>)
2. Financial Supervisory Service: FINE Financial Consumer Information Portal, Financial Terms Dictionary (<https://fine.fss.or.kr/fine/fnctip/fncDicary/list.do?menuNo=900021>)
3. KDI Economic Information Center: Current Affairs Glossary (<https://eiec.kdi.re.kr/material/wordDic.do>)
4. The Korea Economic Daily / Hankyung.com: Hankyung Dictionary of Economic Terms (<https://terms.naver.com/list.naver?cid=42107&categoryId=42107>), Today's TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=1>), Today's Junior TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=5>), Saenggeul Saenggeul Hankyung (<https://sgsg.hankyung.com/tesat/study>)
5. Ministry of SMEs and Startups / Government of the Republic of Korea: Ministry of SMEs and Startups Terminology (<https://terms.naver.com/list.naver?cid=42103&categoryId=42103>)
6. Go Seong-sam / Beobmun Publishing: Dictionary of Accounting and Tax Terms (<https://terms.naver.com/list.naver?cid=51737&categoryId=51737>)
7. Mankiw's Principles of Economics, 8th edition, word index
8. kyujinpy/KOR-OpenOrca-Platypus-v3 (<https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3>)


At the request of the original authors, the data may not be used for commercial purposes, so the model is released under the CC-BY-NC-4.0 license.
The copyright of the data used belongs to its original authors, so please contact them before using it.




## Training Hyperparameters

|Hyperparameter|SGEcon/KoSOLAR-10.7B-v0.2_fin_v4|
|------|---|
|LoRA method|LoRA|
|load in 4 bit|True|
|learning rate|1e-5|
|lr scheduler|linear|
|lora alpha|16|
|lora rank|16|
|lora dropout|0.05|
|optim|paged_adamw_32bit|
|target_modules|q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head|


   
## License
This model is distributed under the Creative Commons Attribution-NonCommercial 4.0 International Public License (CC-BY-NC-4.0).



## Example
Below is a sample question and answer, translated from the original Korean; we only removed duplicate sentences from the model's output.


> Can you explain the role of a central bank?

>> A central bank is a national institution established to oversee the circulation of the currency issued by the state and the stable operation of the economy. ① To keep the economy stable, the central bank supervises financial institutions, issues currency, and conducts monetary policy. ② Implementing monetary policy includes supplying liquidity, adjusting interest rates to balance demand and supply, maintaining reserve requirements, and managing the exchange rate. ③ The central bank is also responsible for overseeing financial institutions' lending in excess of their limits, foreign exchange transactions, and the issuance and exchange of banknotes. ④ Monetary policy aims to achieve price stability, increase employment, and promote balanced economic growth by controlling the issuance and supply of money and credit.