Abstract
We introduce Kanana, a series of bilingual language models that demonstrate exceptional performance in Korean and competitive performance in English. The computational cost of Kanana is significantly lower than that of state-of-the-art models of similar size. The report details the techniques employed during pre-training to achieve compute-efficient yet competitive models, including high-quality data filtering, staged pre-training, depth up-scaling, and pruning and distillation. Furthermore, the report outlines the methodologies utilized during the post-training of the Kanana models, encompassing supervised fine-tuning and preference optimization, aimed at enhancing their capability for seamless interaction with users. Lastly, the report elaborates on the approaches used to adapt the models to specific scenarios, such as embedding, retrieval-augmented generation, and function calling. The Kanana model series spans from 2.1B to 32.5B parameters, with the 2.1B models (base, instruct, embedding) publicly released to promote research on Korean language models.
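Among the pre-training techniques named above, depth up-scaling is perhaps the least self-explanatory: a trained decoder is grown by duplicating a contiguous block of its transformer layers, and the enlarged model is then pre-trained further. The sketch below illustrates that general idea for a Llama-style model in PyTorch; it is not Kanana's exact recipe, and the `model.model.layers` attribute path and the choice of which layers to copy are assumptions for illustration.

```python
# Minimal sketch of depth up-scaling via layer duplication (illustrative,
# not the Kanana recipe). Assumes a Llama-style decoder whose layers live
# at model.model.layers as an nn.ModuleList.
import copy

import torch.nn as nn


def depth_upscale(model, start: int, end: int):
    """Duplicate decoder layers [start, end) and splice the copies in after them."""
    layers = model.model.layers  # Llama-style layer list (assumption)
    copies = [copy.deepcopy(layers[i]) for i in range(start, end)]
    new_layers = list(layers[:end]) + copies + list(layers[end:])
    model.model.layers = nn.ModuleList(new_layers)
    model.config.num_hidden_layers = len(new_layers)
    return model  # further pre-training would follow to recover quality
```

After splicing, the copied layers start from trained weights rather than random initialization, which is why continued pre-training of the deeper model can be cheaper than training it from scratch.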
Community
Kakao just released the Kanana LLM technical paper and the 2.1B model series (base, instruct, and embedding)!
models: https://huggingface.co./collections/kakaocorp/kanana-nano-21b-67a326cda1c449c8d4172259
github: https://github.com/kakao/kanana
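For anyone who wants to try the released weights, here is a minimal usage sketch with the `transformers` library. The model id is inferred from the naming in the linked collection; check the model card for the exact id, chat template, and recommended generation settings.

```python
# Minimal sketch: chat with the released 2.1B instruct model via transformers.
# The model id below is assumed from the collection naming; verify on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kakaocorp/kanana-nano-2.1b-instruct"  # assumed id from the collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```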
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this one:
- InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning (2025)
- UrduLLaMA 1.0: Dataset Curation, Preprocessing, and Evaluation in Low-Resource Settings (2025)
- From Drafts to Answers: Unlocking LLM Potential via Aggregation Fine-Tuning (2025)
- Adapting Language-Specific LLMs to a Reasoning Model in One Day via Model Merging -- An Open Recipe (2025)
- Multilingual Language Model Pretraining using Machine-translated Data (2025)
- The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities (2025)
- Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study (2025)