Kanana
🤗 Models | 📄 Blog | 📄 Technical Report | 💻 GitHub
Introduction
We introduce Kanana, a series of bilingual language models developed by Kakao that demonstrate exceptional performance in Korean and competitive performance in English. The computational cost of Kanana is significantly lower than that of state-of-the-art models of similar size. The report details the techniques employed during pre-training to achieve compute-efficient yet competitive models, including high-quality data filtering, staged pre-training, depth up-scaling, and pruning and distillation. Furthermore, the report outlines the methodologies utilized during post-training of the Kanana models, encompassing supervised fine-tuning and preference optimization, aimed at enhancing their capability for seamless interaction with users. Lastly, the report elaborates on the approaches used to adapt the models to specific scenarios, such as embedding, function calling, and Retrieval-Augmented Generation (RAG). The Kanana model series spans from 2.1B to 32.5B parameters, with the 2.1B models (base, instruct, embedding, function call, and RAG) publicly released to promote research on Korean language models.
Neither the pre-training nor the post-training data includes Kakao user data.
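Among the pre-training techniques above, depth up-scaling grows a smaller pre-trained model into a deeper one by duplicating its transformer layers and then continuing pre-training. Below is a toy sketch of the general idea; the function name, the SOLAR-style seam handling, and the `overlap` value are illustrative assumptions, not the report's exact recipe.

```python
import copy
from torch import nn

def depth_upscale(layers: nn.ModuleList, overlap: int = 8) -> nn.ModuleList:
    """Toy depth up-scaling: stack two copies of the layer stack,
    dropping `overlap` layers at the seam, then continue pre-training.
    (Illustrative sketch only, not the report's exact procedure.)"""
    n = len(layers)
    first = [layers[i] for i in range(n - overlap)]    # all but the last `overlap` layers
    second = [layers[i] for i in range(overlap, n)]    # all but the first `overlap` layers
    # deep-copy so the two halves no longer share parameters
    return nn.ModuleList(copy.deepcopy(m) for m in first + second)
```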
News
- 2025/02/27: Released the Technical Report and 🤗 HF model weights.
- 2025/01/10: Published a blog post about the development of the Kanana-Nano model. (Kanana-Nano)
- 2024/11/14: Published blog posts about the development of the Kanana models. (Kanana LLM: Pre-training, Kanana LLM: Post-training)
- ▶️ 2024/11/06: Published a presentation video about the development of the Kanana models. (if(kakaoAI)2024)
Performance
Below is a partial report on the performance of the Kanana model series. Please refer to the Technical Report for the full results.
Pre-trained Model Performance
| Models | MMLU | KMMLU | HAERAE | HumanEval | MBPP | GSM8K |
|---|---|---|---|---|---|---|
| **27b+ scale** | | | | | | |
| Kanana-Flag-32.5b | 77.68 | 62.10 | 90.47 | 51.22 | 63.40 | 70.05 |
| Qwen2.5-32b | 83.10 | 63.15 | 75.16 | 50.00 | 73.40 | 82.41 |
| Gemma-2-27b | 75.45 | 51.16 | 69.11 | 51.22 | 64.60 | 74.37 |
| EXAONE-3.5-32b | 72.68 | 46.36 | 82.22 | - | - | - |
| Aya-Expanse-32b | 74.52 | 49.57 | 80.66 | - | - | - |
| **7b+ scale** | | | | | | |
| Kanana-Essence-9.8b | 67.61 | 50.57 | 84.98 | 40.24 | 53.60 | 63.61 |
| Llama-3.1-8b | 65.18 | 41.02 | 61.78 | 35.37 | 48.60 | 50.87 |
| Qwen2.5-7b | 74.19 | 51.68 | 67.46 | 56.71 | 63.20 | 83.85 |
| Gemma-2-9b | 70.34 | 48.18 | 66.18 | 37.20 | 53.60 | 68.16 |
| EXAONE-3.5-7.8b | 65.36 | 45.30 | 77.54 | - | - | - |
| Aya-Expanse-8b | 62.52 | 40.11 | 71.95 | - | - | - |
| **2b+ scale** | | | | | | |
| Kanana-Nano-2.1b | 54.83 | 44.80 | 77.09 | 31.10 | 46.20 | 46.32 |
| Llama-3.2-3b | 56.40 | 35.57 | 47.66 | 25.61 | 39.00 | 27.37 |
| Qwen2.5-3b | 65.57 | 45.28 | 61.32 | 37.80 | 55.60 | 69.07 |
| Gemma-2-2b | 52.89 | 30.67 | 45.55 | 20.12 | 28.20 | 24.72 |
| EXAONE-3.5-2.4b | 59.27 | 43.58 | 69.65 | - | - | - |
| **70b+ scale** | | | | | | |
| Llama-3.1-70b | 78.93 | 53.00 | 76.35 | 57.32 | 66.60 | 81.73 |
| Qwen2.5-72b | 86.12 | 68.57 | 80.84 | 55.49 | 76.40 | 92.04 |
Post-trained Model Performance
Instruction-following Benchmarks
| Models | MT-Bench | LogicKor | KoMT-Bench | WildBench | IFEval |
|---|---|---|---|---|---|
| **27b+ scale** | | | | | |
| Kanana-Flag-32.5b | 8.356 | 9.524 | 8.058 | 54.14 | 0.856 |
| Qwen2.5-32b | 8.331 | 8.988 | 7.847 | 51.13 | 0.822 |
| Gemma-2-27b | 8.088 | 8.869 | 7.373 | 46.46 | 0.817 |
| EXAONE-3.5-32b | 8.375 | 9.202 | 7.907 | 54.30 | 0.845 |
| Aya-Expanse-32b | 7.788 | 8.941 | 7.626 | 48.36 | 0.735 |
| **7b+ scale** | | | | | |
| Kanana-Essence-9.8b | 7.769 | 8.964 | 7.706 | 47.27 | 0.799 |
| Llama-3.1-8b | 7.500 | 6.512 | 5.336 | 33.20 | 0.772 |
| Qwen2.5-7b | 7.625 | 7.952 | 6.808 | 41.31 | 0.760 |
| Gemma-2-9b | 7.633 | 8.643 | 7.029 | 40.92 | 0.750 |
| EXAONE-3.5-7.8b | 8.213 | 9.357 | 8.013 | 50.98 | 0.826 |
| Aya-Expanse-8b | 7.131 | 8.357 | 7.006 | 38.50 | 0.645 |
| **2b+ scale** | | | | | |
| Kanana-Nano-2.1b | 6.400 | 7.964 | 5.857 | 25.41 | 0.720 |
| Llama-3.2-3b | 7.050 | 4.452 | 3.967 | 21.91 | 0.767 |
| Qwen2.5-3b | 6.969 | 6.488 | 5.274 | 25.76 | 0.355 |
| Gemma-2-2b | 7.225 | 5.917 | 4.835 | 28.71 | 0.428 |
| EXAONE-3.5-2.4b | 7.919 | 8.941 | 7.223 | 41.68 | 0.790 |
| **70b+ scale** | | | | | |
| Llama-3.1-70b | 8.275 | 8.250 | 6.970 | 46.50 | 0.875 |
| Qwen2.5-72b | 8.619 | 9.214 | 8.281 | 55.25 | 0.861 |
General Benchmarks
| Models | MMLU | KMMLU | HAE-RAE | HumanEval+ | MBPP+ | GSM8K | MATH |
|---|---|---|---|---|---|---|---|
| **27b+ scale** | | | | | | | |
| Kanana-Flag-32.5b | 81.08 | 64.19 | 68.18 | 77.44 | 69.84 | 90.83 | 57.82 |
| Qwen2.5-32b | 84.40 | 59.37 | 48.30 | 82.32 | 71.96 | 95.30 | 81.90 |
| Gemma-2-27b | 78.01 | 49.98 | 46.02 | 70.12 | 70.90 | 91.05 | 53.80 |
| EXAONE-3.5-32b | 78.30 | 55.44 | 52.27 | 78.66 | 70.90 | 93.56 | 76.80 |
| Aya-Expanse-32b | 74.49 | 42.35 | 51.14 | 64.63 | 65.61 | 75.06 | 42.82 |
| **7b+ scale** | | | | | | | |
| Kanana-Essence-9.8b | 70.64 | 50.76 | 47.16 | 72.56 | 69.05 | 84.91 | 42.24 |
| Llama-3.1-8b | 71.18 | 39.24 | 40.91 | 60.98 | 57.67 | 82.71 | 49.86 |
| Qwen2.5-7b | 77.23 | 46.87 | 37.50 | 73.78 | 70.63 | 91.58 | 75.22 |
| Gemma-2-9b | 73.47 | 44.47 | 39.77 | 59.76 | 64.55 | 87.72 | 48.10 |
| EXAONE-3.5-7.8b | 72.62 | 52.09 | 46.02 | 79.27 | 66.67 | 89.99 | 73.50 |
| Aya-Expanse-8b | 61.23 | 35.78 | 39.20 | 42.68 | 56.88 | 78.85 | 30.80 |
| **2b+ scale** | | | | | | | |
| Kanana-Nano-2.1b | 52.48 | 38.51 | 33.52 | 63.41 | 62.43 | 72.32 | 29.26 |
| Llama-3.2-3b | 56.09 | 3.07 | 17.05 | 56.71 | 50.26 | 66.57 | 38.18 |
| Qwen2.5-3b | 69.18 | 38.33 | 32.39 | 67.68 | 64.02 | 84.00 | 65.72 |
| Gemma-2-2b | 57.69 | 6.99 | 7.95 | 35.37 | 45.24 | 49.81 | 21.68 |
| EXAONE-3.5-2.4b | 63.19 | 14.27 | 14.20 | 70.73 | 59.79 | 83.78 | 64.04 |
| **70b+ scale** | | | | | | | |
| Llama-3.1-70b | 83.48 | 39.08 | 53.41 | 75.61 | 66.40 | 91.66 | 63.98 |
| Qwen2.5-72b | 87.14 | 65.78 | 60.80 | 81.10 | 75.66 | 95.45 | 82.60 |
Embedding Model Performance
| Backbone | Kanana-Nano-2.1b | Llama-3.2-3b | Qwen2.5-3b | Llama-3.2-1b | Qwen2.5-1.5b |
|---|---|---|---|---|---|
| English | 51.56 | 53.28 | 54.00 | 48.77 | 50.60 |
| Korean | 65.00 | 59.43 | 62.10 | 54.68 | 54.60 |
| Avg. | 58.28 | 56.35 | 58.05 | 51.73 | 52.60 |
Quickstart
🤗 HuggingFace Transformers
`transformers>=4.45.0` or the latest version is required to run the `Kanana` models.

```bash
# quote the requirement so the shell does not treat '>' as a redirection
pip install "transformers>=4.45.0"
```
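Example Usage for kanana-nano-2.1b-instruct
A minimal generation sketch, assuming the instruct checkpoint `kakaocorp/kanana-nano-2.1b-instruct` and the standard transformers chat-template API; adjust the model id and generation settings to your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumed checkpoint id; replace if the released instruct model differs
model_name = "kakaocorp/kanana-nano-2.1b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
).to("cuda")

messages = [{"role": "user", "content": "Explain what a bilingual language model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```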
Example Usage for kanana-nano-2.1b-embedding
You need to install the `datasets` package (`pip install datasets`) before using the `kanana-nano-2.1b-embedding` model.
```python
import torch.nn.functional as F
from transformers import AutoModel

instruction = "Given a question, retrieve passages that answer the question"
queries = [
    "are judo throws allowed in wrestling?",
    "how to become a radiology technician in michigan?",
]
passages = [
    "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
    "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan.",
]

# load the embedding model (custom code shipped with the checkpoint)
model = AutoModel.from_pretrained(
    "kakaocorp/kanana-nano-2.1b-embedding",
    trust_remote_code=True,
).to("cuda")

max_length = 512

# embed queries with the task instruction, passages without one
query_embeddings = model.encode(queries, instruction=instruction, max_length=max_length)
passage_embeddings = model.encode(passages, instruction="", max_length=max_length)

# get the embeddings with a DataLoader (splitting the inputs into multiple mini-batches)
# batch_size = 2
# query_embeddings = model._do_encode(queries, batch_size=batch_size, instruction=instruction, max_length=max_length)
# passage_embeddings = model._do_encode(passages, batch_size=batch_size, instruction="", max_length=max_length)

# L2-normalize, then score query-passage pairs by scaled cosine similarity
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1)
scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
# Output:
# [[84.36527252197266, 31.752296447753906], [35.940425872802734, 81.82719421386719]]
```
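The score matrix can be read row-wise to rank passages for each query (note the diagonal dominance in the output above). A small follow-up sketch, reusing the variables from the example:

```python
# pick the highest-scoring passage for each query (row-wise argmax)
best = scores.argmax(dim=1)
for query, idx in zip(queries, best.tolist()):
    print(f"{query!r} -> passage {idx}")
```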
License
The `Kanana` models are licensed under CC-BY-NC-4.0.
Citation
```bibtex
@misc{kananallmteam2025kananacomputeefficientbilinguallanguage,
    title={Kanana: Compute-efficient Bilingual Language Models},
    author={Kanana LLM Team and Yunju Bak and Hojin Lee and Minho Ryu and Jiyeon Ham and Seungjae Jung and Daniel Wontae Nam and Taegyeong Eo and Donghun Lee and Doohae Jung and Boseop Kim and Nayeon Kim and Jaesun Park and Hyunho Kim and Hyunwoong Ko and Changmin Lee and Kyoung-Woon On and Seulye Baeg and Junrae Cho and Sunghee Jung and Jieun Kang and EungGyun Kim and Eunhwa Kim and Byeongil Ko and Daniel Lee and Minchul Lee and Miok Lee and Shinbok Lee and Gaeun Seo},
    year={2025},
    eprint={2502.18934},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2502.18934},
}
```
Contributors
- Pre-training: Yunju Bak, Doohae Jung, Boseop Kim, Nayeon Kim, Hojin Lee, Jaesun Park, Minho Ryu
- Post-training: Jiyeon Ham, Seungjae Jung, Hyunho Kim, Hyunwoong Ko, Changmin Lee, Daniel Wontae Nam, Kyoung-Woon On
- Adaptation: Seulye Baeg, Junrae Cho, Taegyeong Eo, Sunghee Jung, Jieun Kang, EungGyun Kim, Eunhwa Kim, Byeongil Ko, Daniel Lee, Donghun Lee, Minchul Lee, Miok Lee, Shinbok Lee, Minho Ryu, Gaeun Seo
Contact
- Kanana LLM Team Technical Support: [email protected]
- Business & Partnership Contact: [email protected]