A newer version of this model is available: CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct-2412


Model Description

Model Details

  • Name: Carrot Llama-3.2 Rabbit Ko
  • Version: 3B Instruct
  • Base Model: CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct
  • Languages: Korean, English
  • Model Type: Large Language Model (Instruction-tuned)

Training Process

๋ณธ ๋ชจ๋ธ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ฃผ์š” ํ›ˆ๋ จ ๋‹จ๊ณ„๋ฅผ ๊ฑฐ์ณค์Šต๋‹ˆ๋‹ค:

  1. SFT (Supervised Fine-Tuning)
    • Fine-tuned the base model on high-quality Korean and English datasets (a minimal training sketch is shown after this list)
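
The card does not publish the datasets, framework, or hyperparameters used for this SFT stage, so the snippet below is only a hedged sketch of a standard supervised fine-tuning setup with the transformers Trainer; the data file name, sequence length, and training arguments are illustrative assumptions, not the author's actual configuration.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

# Base checkpoint as listed in Model Details above.
base = "CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical instruction dataset with a "text" column of chat-formatted examples.
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="rabbit-ko-sft",          # illustrative output path
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()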

Limitations

  • Limited performance on complex tasks due to the 3B parameter scale
  • Lacks deep expertise in specialized domains
  • Possible bias and hallucination in outputs

Ethics Statement

๋ชจ๋ธ ๊ฐœ๋ฐœ ๊ณผ์ •์—์„œ ์œค๋ฆฌ์  ๊ณ ๋ ค์‚ฌํ•ญ์„ ์ตœ๋Œ€ํ•œ ๋ฐ˜์˜ํ•˜์˜€์œผ๋‚˜, ์‚ฌ์šฉ์ž๋Š” ํ•ญ์ƒ ๊ฒฐ๊ณผ๋ฅผ ๋น„ํŒ์ ์œผ๋กœ ๊ฒ€ํ† ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct")
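
The card stops at loading the model. As a minimal follow-up, the sketch below runs a chat-style generation with the standard transformers chat-template API; the prompt text and decoding settings are illustrative assumptions, not values from the card.

# Minimal generation sketch (prompt and sampling settings are illustrative).
messages = [
    {"role": "user", "content": "Introduce yourself briefly in Korean."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))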

Score

| Tasks    | Version | Filter           | n-shot | Metric                    | Value  | Stderr   |
|----------|---------|------------------|--------|---------------------------|--------|----------|
| gsm8k    | 3       | flexible-extract | 5      | exact_match ↑             | 0.6490 | ± 0.0131 |
| gsm8k    | 3       | strict-match     | 5      | exact_match ↑             | 0.0023 | ± 0.0013 |
| gsm8k-ko | 3       | flexible-extract | 5      | exact_match ↑             | 0.3275 | ± 0.0134 |
| gsm8k-ko | 3       | strict-match     | 5      | exact_match ↑             | 0.2737 | ± 0.0134 |
| ifeval   | 4       | none             | 5      | inst_level_loose_acc ↑    | 0.8058 | ± N/A    |
| ifeval   | 4       | none             | 5      | inst_level_strict_acc ↑   | 0.7686 | ± N/A    |
| ifeval   | 4       | none             | 5      | prompt_level_loose_acc ↑  | 0.7320 | ± 0.0191 |
| ifeval   | 4       | none             | 5      | prompt_level_strict_acc ↑ | 0.6858 | ± 0.0200 |
| Tasks                          | Version | Filter | n-shot | Metric     | Value  | Stderr   |
|--------------------------------|---------|--------|--------|------------|--------|----------|
| haerae                         | 1       | none   |        | acc ↑      | 0.4180 | ± 0.0148 |
| haerae                         | 1       | none   |        | acc_norm ↑ | 0.4180 | ± 0.0148 |
| - haerae_general_knowledge     | 1       | none   | 5      | acc ↑      | 0.3125 | ± 0.0350 |
| - haerae_general_knowledge     | 1       | none   | 5      | acc_norm ↑ | 0.3125 | ± 0.0350 |
| - haerae_history               | 1       | none   | 5      | acc ↑      | 0.3404 | ± 0.0347 |
| - haerae_history               | 1       | none   | 5      | acc_norm ↑ | 0.3404 | ± 0.0347 |
| - haerae_loan_word             | 1       | none   | 5      | acc ↑      | 0.4083 | ± 0.0379 |
| - haerae_loan_word             | 1       | none   | 5      | acc_norm ↑ | 0.4083 | ± 0.0379 |
| - haerae_rare_word             | 1       | none   | 5      | acc ↑      | 0.4815 | ± 0.0249 |
| - haerae_rare_word             | 1       | none   | 5      | acc_norm ↑ | 0.4815 | ± 0.0249 |
| - haerae_standard_nomenclature | 1       | none   | 5      | acc ↑      | 0.4771 | ± 0.0405 |
| - haerae_standard_nomenclature | 1       | none   | 5      | acc_norm ↑ | 0.4771 | ± 0.0405 |
| Tasks            | Version | Filter | n-shot | Metric     | Value  | Stderr   |
|------------------|---------|--------|--------|------------|--------|----------|
| kobest_boolq     | 1       | none   | 5      | acc ↑      | 0.7664 | ± 0.0113 |
| kobest_boolq     | 1       | none   | 5      | f1 ↑       | 0.7662 | ± N/A    |
| kobest_copa      | 1       | none   | 5      | acc ↑      | 0.5620 | ± 0.0157 |
| kobest_copa      | 1       | none   | 5      | f1 ↑       | 0.5612 | ± N/A    |
| kobest_hellaswag | 1       | none   | 5      | acc ↑      | 0.3840 | ± 0.0218 |
| kobest_hellaswag | 1       | none   | 5      | acc_norm ↑ | 0.4900 | ± 0.0224 |
| kobest_hellaswag | 1       | none   | 5      | f1 ↑       | 0.3807 | ± N/A    |
| kobest_sentineg  | 1       | none   | 5      | acc ↑      | 0.5869 | ± 0.0247 |
| kobest_sentineg  | 1       | none   | 5      | f1 ↑       | 0.5545 | ± N/A    |
| kobest_wic       | 1       | none   | 5      | acc ↑      | 0.4952 | ± 0.0141 |
| kobest_wic       | 1       | none   | 5      | f1 ↑       | 0.4000 | ± N/A    |
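
The table layout above matches the output format of EleutherAI's lm-evaluation-harness, but the card does not state the exact command or settings used. The snippet below is therefore only a hedged sketch of how comparable 5-shot numbers could be reproduced; the task list, dtype, and batch size are assumptions.

import lm_eval  # EleutherAI lm-evaluation-harness

# Hedged reproduction sketch; task names follow the tables above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct,dtype=bfloat16",
    tasks=["gsm8k", "ifeval", "haerae", "kobest"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])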

Citation

@article{Llama3.2RabbitKo3BInstruct,
  title={CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct Card},
  author={CarrotAI (L, GEUN)},
  year={2024},
  url = {https://huggingface.co./CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct}
}