---
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
- en
- ko
tags:
- meta
- llama
- llama-3
- akallama
---
# AKALLAMA
We introduce AKALLAMA-70B, a Korean-focused, open-source 70B large language model. It demonstrates considerable improvement in Korean fluency, especially compared to the base Llama 3 model. To our knowledge, this is one of the first 70B open-source Korean-speaking language models.
## Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub.
- Developed by: mirlab
- Language(s) (NLP): Korean, English
- License: llama3
- Finetuned from model: meta-llama/Meta-Llama-3-70B
## Evaluation
For local inference and evaluation, we highly recommend using the Ollama library. See the "Customize a model" section of the Ollama GitHub page.
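A minimal Modelfile sketch for running the model through Ollama, assuming you have a GGUF conversion of the weights on disk (the filename below is hypothetical) and that the model uses the standard Llama 3 prompt format:

```
# Hypothetical local GGUF conversion of the model
FROM ./akallama-70b.Q4_K_M.gguf

# Llama 3 style prompt template (assumed; verify against the model's chat template)
TEMPLATE """<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
PARAMETER stop "<|eot_id|>"
```

You would then build and run the model with `ollama create akallama -f Modelfile` followed by `ollama run akallama`.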
## Training Details
### Training Procedure
We closely followed the training procedure of the Zephyr ORPO model. Please check out Hugging Face's alignment handbook for further details, including the chat template.
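In practice the chat template is applied via `tokenizer.apply_chat_template`; purely as an illustration, a hand-rolled sketch of the Llama 3 prompt format that Meta-Llama-3 models ship with (the exact template bundled with this model may differ):

```python
def format_llama3_chat(messages):
    """Render a messages list into the Llama 3 prompt format.

    Hand-rolled for illustration only; in real use, let the
    tokenizer's apply_chat_template do this.
    """
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


messages = [
    {"role": "system", "content": "You are AkaLlama, a helpful assistant."},
    {"role": "user", "content": "안녕하세요?"},
]
print(format_llama3_chat(messages))
```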
### Training Data
A detailed description of the training data will be announced later.
## Examples
## Thanks to
- The Data Center of the Department of Artificial Intelligence at Yonsei University, for providing the A100 cluster