This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.
The license is cc-by-nc-sa-4.0.
🐳KOR-Orca-Platypus-13B🐳
Model Details
Model Developers Kyujin Han (kyujinpy)
Input Models input text only.
Output Models generate text only.
Model Architecture
KOR-Orca-Platypus-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
Repo Link
GitHub Korean-OpenOrca: 🐳Korean-OpenOrca🐳
Base Model hyunseoki/ko-en-llama2-13b
Training Dataset
I used kyujinpy/KOR-OpenOrca-Platypus-v3 (private for now; please wait for release).
I trained on a single A100 40GB GPU via Colab.
Model Benchmark
KO-LLM leaderboard
- Results follow the Open KO-LLM LeaderBoard.
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| KOR-Orca-Platypus-13B🐳 | 46.59 | 42.06 | 53.95 | 42.28 | 43.55 | 51.12 |
| KOR-Orca-Platypus-13B🐳-v2 | 49.48 | 44.03 | 54.43 | 42.23 | 41.64 | 65.05 |
Compared with the top 4 SOTA models (updated 10/09).
Implementation Code
```python
### KO-Platypus
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/KOR-Orca-Platypus-13B-v2"

# Load the model in fp16, sharded automatically across available devices.
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
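A minimal generation sketch using the model and tokenizer loaded above. The Korean instruction prompt and sampling settings here are illustrative assumptions, not the exact template used in training; check the Korean-OpenOrca repo for the intended prompt format.

```python
# Illustrative prompt; the actual instruction template may differ.
prompt = "아래는 질문입니다. 질문에 답하세요.\n\n질문: 한국의 수도는 어디인가요?\n답변:"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

with torch.no_grad():
    output_ids = OpenOrca.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```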