---
library_name: transformers
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
language:
- ko
- en
tags:
- Economic
- Finance
base_model: yanolja/KoSOLAR-10.7B-v0.2
---


## Model Details

Model Developers: Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)

## Model Description

This model is a language model specialized in economics and finance. It was trained on a variety of economics- and finance-related data.
The data sources are listed below. We are not releasing the data we trained on because it was collected for research and policy purposes.
If you wish to use the original data, please contact the original authors directly for permission to use it.

- **Developed by:** Sogang University SGEconFinlab (<https://sc.sogang.ac.kr/aifinlab/>)
- **License:** cc-by-nc-4.0
- **Base Model:** yanolja/KoSOLAR-10.7B-v0.2 (<https://huggingface.co./yanolja/KoSOLAR-10.7B-v0.2>)

## Loading the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

peft_model_id = "SGEcon/EconFinKoSOLAR-10.7B_SFT"
config = PeftConfig.from_pretrained(peft_model_id)

# 4-bit NF4 quantization with double quantization, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load the quantized base model, attach the LoRA adapter, and load the tokenizer
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map={"": 0})
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model.eval()
```

## Conducting Conversation

```python
import re

def gen(x):
    # Build the prompt in the "### 질문: ... ### 답변:" format used for fine-tuning
    inputs = tokenizer(f"### 질문: {x}\n\n### 답변:", return_tensors='pt', return_token_type_ids=False)

    # Move data to GPU (if available)
    inputs = {k: v.to(device="cuda" if torch.cuda.is_available() else "cpu") for k, v in inputs.items()}

    gened = model.generate(
        **inputs,
        max_new_tokens=256,                   # Maximum number of new tokens to generate
        early_stopping=True,
        num_return_sequences=1,               # Generate only one answer
        do_sample=True,                       # Enable sampling to generate a variety of answers
        eos_token_id=tokenizer.eos_token_id,  # Stop at the EOS token
        temperature=0.9,                      # This option is adjustable.
        top_p=0.8,                            # This option is adjustable.
        top_k=100                             # This option is adjustable.
    )

    # Decode the generated sequence and convert it to output text
    decoded = tokenizer.decode(gened[0], skip_special_tokens=True).strip()

    # Extract only the text after the "### 답변:" marker
    answer_start_idx = decoded.find("### 답변:") + len("### 답변:")
    complete_answer = decoded[answer_start_idx:].strip()

    # Truncate at the last sentence-ending punctuation mark (. ? !) to drop an unfinished trailing sentence
    match = re.search(r"[\.\?\!][^\.\?\!]*$", complete_answer)
    if match:
        complete_answer = complete_answer[:match.start() + 1].strip()

    return complete_answer
```
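
For a quick check, the function can be called directly. The question below is the one used in the Example section at the end of this card; the output will vary between runs because sampling is enabled.

```python
# Example call (output varies from run to run because do_sample=True)
question = "중앙은행의 역할에 대해서 설명해줄래?"  # "Can you explain the role of the central bank?"
print(gen(question))
```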

## Training Details

- We trained our model with PEFT (parameter-efficient fine-tuning).
PEFT is a technique that tunes only a small subset of a model's parameters during fine-tuning rather than all of them.
By tuning only a few parameters while leaving the rest fixed, the model is less likely to suffer from catastrophic forgetting, where it forgets previously learned tasks when it learns new ones.
This also significantly reduces computation and storage costs; a minimal configuration sketch follows.
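
As a rough sketch of this setup (the exact training script is not released), a LoRA adapter can be attached with the `peft` library, using the rank, alpha, dropout, and target modules listed in the hyperparameter table further below. `base_model` is assumed to be a quantized base model prepared as in the QLoRA sketch that follows.

```python
from peft import LoraConfig, get_peft_model

# LoRA settings mirroring the hyperparameter table further below
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "lm_head"],
)

# base_model: a quantized base model prepared as in the QLoRA sketch below (assumption)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```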

- We used QLoRA to train the base model.
Quantized Low-Rank Adapters (QLoRA) is an efficient fine-tuning technique that uses a 4-bit quantized pre-trained language model, making it possible to fine-tune models with up to 65 billion parameters on a single 48 GB GPU while significantly reducing memory usage.
The method uses NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; Double Quantization, which further quantizes the quantization constants to reduce average memory usage; and Paged Optimizers, which manage memory spikes during mini-batch processing. Together, these increase memory efficiency without sacrificing performance; a minimal setup sketch follows.
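
A minimal sketch of such a QLoRA-style setup, assuming the same `transformers`/`peft` stack used in the loading snippet above; this is an illustration, not the released training code.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

# NF4 4-bit quantization with Double Quantization, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat 4-bit data type
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "yanolja/KoSOLAR-10.7B-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the 4-bit model for training (casts norms, enables input gradients)
base_model = prepare_model_for_kbit_training(base_model)

# Memory spikes during mini-batches are handled by a paged optimizer,
# e.g. optim="paged_adamw_32bit" in TrainingArguments (see the hyperparameter table)
```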

- We also performed instruction tuning using the data we collected and the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from the Hugging Face Hub.
Instruction tuning is supervised learning in which an instruction (together with any input data) is used as the model input and the corresponding output as the target, forming instruction-output pairs.
In other words, instruction tuning fine-tunes a pre-trained model on a specific task or set of tasks, teaching the model to follow specific instructions or guidelines.
It is a type of supervised fine-tuning (SFT) that aims to improve the generality and adaptability of a model by teaching it to understand and follow explicit instructions; a formatting sketch follows.
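
To illustrate the format (a sketch with a hypothetical example pair, since the training data itself is not released), each instruction-output pair can be rendered into the same `### 질문: … ### 답변:` template that the `gen()` function above uses at inference time.

```python
# Hypothetical instruction-output pair; the actual training data is not released
example = {
    "instruction": "기준금리 인상이 물가에 미치는 영향을 설명해 주세요.",  # "Explain how a policy rate hike affects prices."
    "output": "기준금리가 오르면 시중 유동성이 줄어들어 물가 상승 압력이 낮아집니다.",  # "A higher policy rate drains liquidity, easing inflationary pressure."
}

def format_example(ex: dict) -> str:
    # Same prompt template as gen(); the answer follows as the supervision target
    return f"### 질문: {ex['instruction']}\n\n### 답변: {ex['output']}"

print(format_example(example))
```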

## Training Data

1. Bank of Korea: 700 Selected Economic and Financial Terms (<https://www.bok.or.kr/portal/bbs/B0000249/view.do?nttId=235017&menuNo=200765>)
2. Financial Supervisory Service: FINE financial consumer information portal, financial terms dictionary (<https://fine.fss.or.kr/fine/fnctip/fncDicary/list.do?menuNo=900021>)
3. KDI Economic Information Center: current affairs glossary (<https://eiec.kdi.re.kr/material/wordDic.do>)
4. The Korea Economic Daily / Hankyung.com: Hankyung dictionary of economic terms (<https://terms.naver.com/list.naver?cid=42107&categoryId=42107>), Today's TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=1>), Today's Junior TESAT (<https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=5>), Saenggeul Saenggeul Hankyung (<https://sgsg.hankyung.com/tesat/study>)
5. Ministry of SMEs and Startups / Government of the Republic of Korea: Ministry of SMEs and Startups terminology dictionary (<https://terms.naver.com/list.naver?cid=42103&categoryId=42103>)
6. Go Seong-sam / Beommun Publishing: dictionary of accounting and tax terms (<https://terms.naver.com/list.naver?cid=51737&categoryId=51737>)
7. Word index of Mankiw's Principles of Economics, 8th edition
8. kyujinpy/KOR-OpenOrca-Platypus-v3 (<https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3>)

At the request of the original authors, the data is not to be used for commercial purposes; the model is therefore released under the CC-BY-NC-4.0 license.
The copyright of the data used belongs to the original authors, so please contact them before using it.

## Training Hyperparameters

| Hyperparameter | SGEcon/KoSOLAR-10.7B-v0.2_fin_v4 |
|------|---|
| LoRA method | LoRA |
| load in 4 bit | True |
| learning rate | 1e-5 |
| lr scheduler | linear |
| lora alpha | 16 |
| lora rank | 16 |
| lora dropout | 0.05 |
| optim | paged_adamw_32bit |
| target_modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head |
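
For reference, a sketch of how these values could map onto `transformers.TrainingArguments`. This is an assumption for illustration only; values not listed in the table (output directory, batch size, number of epochs) are placeholders.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./qlora-finetune",      # placeholder, not specified in the table
    learning_rate=1e-5,                 # from the table
    lr_scheduler_type="linear",         # from the table
    optim="paged_adamw_32bit",          # from the table
    bf16=True,                          # assumption, matching the bfloat16 compute dtype above
    per_device_train_batch_size=4,      # placeholder, not specified in the table
    num_train_epochs=1,                 # placeholder, not specified in the table
)
```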

## License

The model is distributed under the Creative Commons Attribution-NonCommercial 4.0 International Public License (CC-BY-NC-4.0).


## Example

We only removed duplicate sentences from the output below.


> Can you explain the role of the central bank? (중앙은행의 역할에 대해서 설명해줄래?)

>> The central bank is a national institution established to manage the circulation of the currency issued by the state and to keep the economy running stably. ① For the stable operation of the economy, the central bank supervises financial institutions, issues currency, and conducts monetary policy. ② Conducting monetary policy includes supplying liquidity, adjusting interest rates to regulate the money supply, maintaining reserve requirements, and adjusting the exchange rate. ③ The central bank is also responsible for lending to financial institutions, foreign exchange transactions, and managing the issuance and exchange of banknotes. ④ Monetary policy aims to stabilize prices, increase employment, and promote balanced economic growth by controlling the issuance and supply of money and credit.