---
library_name: transformers
license: cc-by-nc-4.0
datasets:
  - kyujinpy/KOR-OpenOrca-Platypus-v3
language:
  - ko
  - en
tags:
  - Economic
  - Finance
base_model: davidkim205/komt-mistral-7b-v1
---

Model Details

Model Developers: Sogang University SGEconFinlab (https://sc.sogang.ac.kr/aifinlab/)

Model Description

This model is a language model specialized in economics and finance. It was trained on a variety of economics- and finance-related data; the sources are listed below. We are not releasing the training data itself because it was collected for research and policy purposes. If you wish to use the original data, please contact the respective original authors directly for permission.

Loading the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
from peft import PeftConfig, PeftModel

peft_model_id = "SGEcon/komt-mistral-7b-v1_fin_v5"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 4-bit NF4 with double quantization to reduce memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map={"": 0}
)
# Attach the fine-tuned LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model.eval()
streamer = TextStreamer(tokenizer)  # streams generated tokens to stdout

Conducting a Conversation

from transformers import GenerationConfig

def gen(x):
    generation_config = GenerationConfig(
        temperature=0.9,
        top_p=0.8,
        top_k=50,
        max_new_tokens=256,
        early_stopping=True,
        do_sample=True,
    )
    # Wrap the question in the Mistral instruction template
    q = f"[INST]{x} [/INST]"
    gened = model.generate(
        **tokenizer(
            q,
            return_tensors='pt',
            return_token_type_ids=False
        ).to('cuda'),
        generation_config=generation_config,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        streamer=streamer,
    )
    result_str = tokenizer.decode(gened[0])

    # Remove the input question together with the [INST] and [/INST] tags
    result_str = result_str.replace(q, "").strip()

    # Remove the <s> and </s> special tokens
    result_str = result_str.replace("<s>", "").replace("</s>", "").strip()

    return result_str
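
A quick usage check, assuming the model, tokenizer, and streamer above are loaded on a CUDA device (the Korean prompt corresponds to the question in the Example section below):

print(gen("μ€‘μ•™μ€ν–‰μ˜ 역할에 λŒ€ν•΄μ„œ μ„€λͺ…ν•΄μ€„λž˜?"))  # "Can you explain the role of a central bank?"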

Training Details

  • We used QLoRA to train the base model. Quantized Low-Rank Adaptation (QLoRA) is an efficient technique that fine-tunes on top of a 4-bit quantized pre-trained language model, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage. The method combines NormalFloat 4-bit (NF4), a data type that is theoretically optimal for normally distributed weights; double quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and paged optimizers, which manage memory spikes during mini-batch processing. Together these increase memory efficiency without sacrificing performance (see the configuration sketch after this list).

  • We also performed instruction tuning, using the data we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from the Hugging Face Hub. Instruction tuning is supervised fine-tuning in which the instruction and its input are combined into the model input and the desired output serves as the target, so the model learns from instruction-response pairs (a formatting sketch also follows below).
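
As a rough illustration of the QLoRA setup described above, here is a minimal configuration sketch, not the exact training script (the paged_adamw_32bit optimizer shown is the canonical QLoRA choice; note that the hyperparameter table below lists adamw_torch for this model):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# 4-bit NF4 quantization with double quantization, as in the QLoRA recipe
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat 4-bit data type
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16 for stability
)
base = AutoModelForCausalLM.from_pretrained(
    "davidkim205/komt-mistral-7b-v1",
    quantization_config=bnb_config,
    device_map="auto",
)
# Paged optimizers manage memory spikes during mini-batch processing
training_args = TrainingArguments(output_dir="qlora-out", optim="paged_adamw_32bit")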
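
And a minimal sketch of how an instruction-tuning pair can be assembled for this model's chat template (the field names instruction, input, and output are an assumed dataset schema, not the released data format):

def format_example(example):
    # The instruction and its input together form the model input;
    # the desired output is appended as the supervised target.
    prompt = f"[INST]{example['instruction']} {example.get('input', '')} [/INST]"
    return prompt + example["output"]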

Training Data

  1. ν•œκ΅­μ€ν–‰: κ²½μ œκΈˆμœ΅μš©μ–΄ 700μ„ (https://www.bok.or.kr/portal/bbs/B0000249/view.do?nttId=235017&menuNo=200765)
  2. κΈˆμœ΅κ°λ…μ›: κΈˆμœ΅μ†ŒλΉ„μž 정보 포털 파인 κΈˆμœ΅μš©μ–΄μ‚¬μ „(https://fine.fss.or.kr/fine/fnctip/fncDicary/list.do?menuNo=900021)
  3. KDI κ²½μ œμ •λ³΄μ„Όν„°: μ‹œμ‚¬ μš©μ–΄μ‚¬μ „(https://eiec.kdi.re.kr/material/wordDic.do)
  4. ν•œκ΅­κ²½μ œμ‹ λ¬Έ/ν•œκ²½λ‹·μ»΄: ν•œκ²½κ²½μ œμš©μ–΄μ‚¬μ „(https://terms.naver.com/list.naver?cid=42107&categoryId=42107), 였늘의 TESAT(https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=1), 였늘의 μ£Όλ‹ˆμ–΄ TESAT(https://www.tesat.or.kr/bbs.frm.list/tesat_study?s_cateno=5), μƒκΈ€μƒκΈ€ν•œκ²½(https://sgsg.hankyung.com/tesat/study)
  5. μ€‘μ†Œλ²€μ²˜κΈ°μ—…λΆ€/λŒ€ν•œλ―Όκ΅­μ •λΆ€: μ€‘μ†Œλ²€μ²˜κΈ°μ—…λΆ€ μ „λ¬Έμš©μ–΄(https://terms.naver.com/list.naver?cid=42103&categoryId=42103)
  6. κ³ μ„±μ‚Ό/λ²•λ¬ΈμΆœνŒμ‚¬: νšŒκ³„Β·μ„Έλ¬΄ μš©μ–΄μ‚¬μ „(https://terms.naver.com/list.naver?cid=51737&categoryId=51737)
  7. 맨큐의 κ²½μ œν•™ 8판 Word Index
  8. kyujinpy/KOR-OpenOrca-Platypus-v3(https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)

At the request of the original authors, the data may not be used for commercial purposes. The model is therefore released under the CC-BY-NC-4.0 license. The copyright of the underlying data remains with its original authors, so please contact them before reusing it.

Training Hyperparameters

Hyperparameter    SGEcon/komt-mistral-7b-v1_fin_v5
LoRA method       LoRA
load in 4-bit     True
learning rate     1e-6
LoRA alpha        8
LoRA rank         32
LoRA dropout      0.05
optimizer         adamw_torch
target modules    o_proj, q_proj, up_proj, down_proj, gate_proj, k_proj, v_proj, lm_head
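
These settings map onto a peft LoraConfig roughly as follows (a sketch assuming the standard peft API; task_type is an assumption, not stated in the table):

from peft import LoraConfig

lora_config = LoraConfig(
    r=32,               # LoRA rank
    lora_alpha=8,       # LoRA alpha
    lora_dropout=0.05,  # LoRA dropout
    target_modules=[
        "o_proj", "q_proj", "up_proj", "down_proj",
        "gate_proj", "k_proj", "v_proj", "lm_head",
    ],
    task_type="CAUSAL_LM",  # assumption: causal language modeling
)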

License

This model is distributed under the Creative Commons Attribution-NonCommercial 4.0 International Public License.

Example

μ€‘μ•™μ€ν–‰μ˜ 역할에 λŒ€ν•΄μ„œ μ„€λͺ…ν•΄μ€„λž˜?

쀑앙은행은 κ΅­κ°€ 경제의 μ•ˆμ •μ„ μœ μ§€ν•˜κΈ° μœ„ν•΄ κ΅­κ°€μ˜ 톡화 λ°œν–‰, 은행 업무 감독, λŒ€μΆœ 쑰절 λ“±μ˜ μ€‘μš”ν•œ 역할을 μˆ˜ν–‰ν•˜λŠ” 금육 기관이닀. 쀑앙은행은 κ΅­κ°€μ˜ 톡화 λ°œν–‰ μ‘°μ ˆμ„ 톡해 λ¬Όκ°€ μƒμŠΉμ„ μ–΅μ œν•˜κ³ , 이λ₯Ό 톡해 가격 μ•ˆμ •μ„±μ„ μœ μ§€ν•˜κ³ μž ν•œλ‹€. λ˜ν•œ, 쀑앙은행은 λŒ€μΆœ μ‘°μ ˆμ„ 톡해 금리λ₯Ό μ‘°μ ˆν•˜μ—¬ 자금 쑰달 μ‹œμž₯에 μ μ ˆν•œ 금리 μˆ˜μ€€μ„ μœ μ§€ν•˜κ³ , 이λ₯Ό 톡해 경제 ν™œλ™μ„ 적절히 μ‘°μ ˆν•  수 μžˆλ‹€.