---
datasets:
  - kyujinpy/KOpen-platypus
language:
  - ko
  - en
pipeline_tag: text-generation
---

# Model Card for Ko-Luxia-8B-it-v0.2

**base_model**: Ko-Llama3-Luxia-8B

## Basic usage

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model (bfloat16, placed automatically on available devices)
tokenizer = AutoTokenizer.from_pretrained("MDDDDR/Ko-Luxia-8B-it-v0.2")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/Ko-Luxia-8B-it-v0.2",
    device_map="auto",
    torch_dtype=torch.bfloat16
)

# Korean prompt: "What is an apple?"
input_text = "사과가 뭐야?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate a response (cap the number of new tokens as needed)
outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
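
For GPUs with less memory, the model can also be loaded in 4-bit. This is a minimal sketch, not part of the original card: it assumes `bitsandbytes` is installed and uses the standard `BitsAndBytesConfig` path in Transformers.

```python
# pip install bitsandbytes
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization settings (assumed defaults, adjust as needed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/Ko-Luxia-8B-it-v0.2")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/Ko-Luxia-8B-it-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
```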

## Training dataset

dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
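
A minimal sketch (not from the original card) of loading the training data from the Hugging Face Hub with the `datasets` library, assuming a standard `train` split:

```python
# pip install datasets
from datasets import load_dataset

# Load the KOpen-platypus training split and inspect one example
ds = load_dataset("kyujinpy/KOpen-platypus", split="train")
print(ds)      # row count and column names
print(ds[0])   # a single example record
```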

## Hardware

RTX 3090 Ti 24GB x 1

## Model Benchmark Results

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| kobest_boolq | 1 | none | 0 | acc | 0.5278 | ±0.0133 |
| | | none | 0 | f1 | 0.3954 | N/A |
| kobest_copa | 1 | none | 0 | acc | 0.7380 | ±0.0139 |
| | | none | 0 | f1 | 0.7372 | N/A |
| kobest_hellaswag | 1 | none | 0 | acc | 0.4800 | ±0.0224 |
| | | none | 0 | acc_norm | 0.6180 | ±0.0218 |
| | | none | 0 | f1 | 0.4774 | N/A |
| kobest_sentineg | 1 | none | 0 | acc | 0.5390 | ±0.0250 |
| | | none | 0 | f1 | 0.5037 | N/A |
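
The table layout matches the output of EleutherAI's lm-evaluation-harness. Below is a hedged sketch, not from the original card, of how zero-shot KoBEST scores like these might be reproduced; it assumes lm-evaluation-harness v0.4+, where `simple_evaluate` is exposed in the Python API.

```python
# pip install lm-eval
import lm_eval

# Zero-shot evaluation on the KoBEST tasks reported above
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MDDDDR/Ko-Luxia-8B-it-v0.2,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
)
print(results["results"])
```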