This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.
The license is cc-by-nc-sa-4.0.

CoT-llama2-7B


More details in the GitHub repo: CoT-llama2

Model Details

Model Developers Kyujin Han (kyujinpy)

Input Models input text only.

Output Models generate text only.

Model Architecture

CoT-llama2 is an auto-regressive language model based on the LLaMA2 transformer architecture.

Base Model Llama-2-ko-7b

Training Dataset

I used KoCoT_2000, which was created by translating the KAIST CoT dataset into Korean with DeepL.

I trained on Colab with a single 40GB A100 GPU.
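To inspect the training data, a minimal loading sketch is shown below; the Hub ID kyujinpy/KoCoT_2000 is an assumption based on the dataset name above and should be verified on the Hub before use.

from datasets import load_dataset

# Assumed Hub ID for the KoCoT_2000 dataset; verify before use.
kocot = load_dataset("kyujinpy/KoCoT_2000")
print(kocot)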

Training Hyperparameters

| Hyperparameter | Value |
|---|---|
| batch_size | 64 |
| micro_batch_size | 1 |
| Epochs | 15 |
| learning_rate | 1e-5 |
| cutoff_len | 2048 |
| lr_scheduler | linear |
| base_model | beomi/llama-2-ko-7b |
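As a rough sketch, the hyperparameters above could be expressed as a Hugging Face TrainingArguments configuration. The output path, the precision flag, and the reading of batch_size/micro_batch_size as gradient accumulation on a single GPU are assumptions, not details taken from the original run.

from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters onto TrainingArguments.
# batch_size 64 with micro_batch_size 1 implies 64 gradient-accumulation steps
# on the single 40GB A100 mentioned above (assumption).
training_args = TrainingArguments(
    output_dir="./cot-llama2-7b",       # hypothetical output path
    per_device_train_batch_size=1,      # micro_batch_size
    gradient_accumulation_steps=64,     # batch_size / micro_batch_size
    num_train_epochs=15,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    fp16=True,                          # assumption: mixed precision on the A100
)
# cutoff_len 2048 would correspond to truncating inputs at max_length=2048
# during tokenization.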

Model Benchmark

LM Eval Harness - Korean (polyglot branch)

Question Answering (QA)

COPA (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
|---|---|---|---|---|
| Polyglot-ko-1.3b | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| Polyglot-ko-3.8b | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| Polyglot-ko-5.8b | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| Polyglot-ko-12.8b | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| Llama-2-Ko-7b 20B | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| Llama-2-Ko-7b 40B | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| KO-platypus2-7B-EX | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| CoT-llama2-7B (ours) | 0.7528 | 0.7888 | 0.7998 | 0.8210 |

Natural Language Inference (NLI)

HellaSwag (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
|---|---|---|---|---|
| Polyglot-ko-1.3b | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| Polyglot-ko-3.8b | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| Polyglot-ko-5.8b | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| Polyglot-ko-12.8b | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| Llama-2-Ko-7b 20B | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| Llama-2-Ko-7b 40B | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| KO-platypus2-7B-EX | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| CoT-llama2-7B (ours) | 0.4543 | 0.4554 | 0.4606 | 0.4579 |

Question Answering (QA)

BoolQ (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
|---|---|---|---|---|
| Polyglot-ko-1.3b | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| Polyglot-ko-3.8b | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| Polyglot-ko-5.8b | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| Polyglot-ko-12.8b | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| Llama-2-Ko-7b 20B | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| Llama-2-Ko-7b 40B | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| KO-platypus2-7B-EX | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| CoT-llama2-7B (ours) | 0.5852 | 0.6947 | 0.7059 | 0.7213 |

Classification

SentiNeg (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
|---|---|---|---|---|
| Polyglot-ko-1.3b | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| Polyglot-ko-3.8b | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| Polyglot-ko-5.8b | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| Polyglot-ko-12.8b | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| Llama-2-Ko-7b 20B | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| Llama-2-Ko-7b 40B | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| KO-platypus2-7B-EX | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| CoT-llama2-7B (ours) | 0.5045 | 0.8054 | 0.7942 | 0.9446 |
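The scores above come from the Korean (polyglot) branch of EleutherAI's lm-evaluation-harness. As a hedged sketch of how such numbers can be reproduced, the snippet below uses the harness's Python API with the KoBEST task names; the exact task identifiers and model type exposed by that branch are assumptions and should be checked against its task registry.

from lm_eval import evaluator

# Assumption: the polyglot branch exposes the KoBEST tasks under these names
# (kobest_copa, kobest_hellaswag, kobest_boolq, kobest_sentineg).
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=kyujinpy/CoT-llama-2k-7b",
    tasks=["kobest_copa", "kobest_hellaswag", "kobest_boolq", "kobest_sentineg"],
    num_fewshot=5,   # repeat with 0/10/50 to fill the other columns above
)
print(results["results"])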

Implementation Code

### CoT-llama2
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/CoT-llama-2k-7b"

# Load the model in fp16 and let accelerate place it on available devices.
cot_llama = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
cot_llama_tokenizer = AutoTokenizer.from_pretrained(repo)
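A short generation example building on the snippet above. The Korean prompt here is only illustrative; the exact instruction template used during CoT fine-tuning is not specified in this card.

# Hypothetical prompt; not necessarily the training template.
prompt = "다음 질문에 단계별로 추론하여 답하세요.\n질문: 한국의 수도는 어디인가요?\n답변:"
inputs = cot_llama_tokenizer(prompt, return_tensors="pt").to(cot_llama.device)
with torch.no_grad():
    output_ids = cot_llama.generate(**inputs, max_new_tokens=256, do_sample=False)
print(cot_llama_tokenizer.decode(output_ids[0], skip_special_tokens=True))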

Readme format: beomi/llama-2-ko-7b

