---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
datasets:
  - maywell/ko_Ultrafeedback_binarized
base_model:
  - meta-llama/Meta-Llama-3-8B-Instruct
---


# T3Q-Llama3-8B-Inst-sft1.0

This model is a version of meta-llama/Meta-Llama-3-8B-Instruct that has been fine-tuned with supervised fine-tuning (SFT) on the maywell/ko_Ultrafeedback_binarized dataset.

**Model Developers:** Chihoon Lee (chihoonlee10), T3Q
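
The exact training recipe is not published in this card. As a rough, hypothetical sketch, an SFT run over the preferred responses of maywell/ko_Ultrafeedback_binarized could look like the following with TRL's `SFTTrainer`; the column names, hyperparameters, and TRL version details below are assumptions, not the authors' actual configuration.

```python
# Hypothetical SFT sketch (TRL >= 0.9 style API); not the authors' exact recipe.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Assumes the dataset exposes a "chosen" column of chat messages,
# as in the UltraFeedback binarized format.
dataset = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")

def to_text(example):
    # Render the preferred conversation with the Llama 3 chat template.
    return {"text": tokenizer.apply_chat_template(example["chosen"], tokenize=False)}

dataset = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="T3Q-Llama3-8B-Inst-sft1.0",
        dataset_text_field="text",
        max_seq_length=2048,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```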

## Transformers pipeline

```python
import transformers
import torch

# Load the fine-tuned model published in this repository.
model_id = "chlee10/T3Q-Llama3-8B-Inst-sft1.0"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the regular EOS token or the Llama 3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the fine-tuned model published in this repository.
model_id = "chlee10/T3Q-Llama3-8B-Inst-sft1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the regular EOS token or the Llama 3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Evaluation

Zero-shot results on the KoBEST benchmark, evaluated with the settings below:

`hf (pretrained=chlee10/T3Q-Llama3-8B-Inst-sft1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None`

|      Task      |Version| Metric |Value |   |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq    |      0|acc     |0.5114|±  |0.0133|
|                |       |macro_f1|0.3546|±  |0.0080|
|kobest_copa     |      0|acc     |0.6000|±  |0.0155|
|                |       |macro_f1|0.5997|±  |0.0155|
|kobest_hellaswag|      0|acc     |0.4120|±  |0.0220|
|                |       |acc_norm|0.5380|±  |0.0223|
|                |       |macro_f1|0.4084|±  |0.0219|
|kobest_sentineg |      0|acc     |0.5063|±  |0.0251|
|                |       |macro_f1|0.3616|±  |0.0169|
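
The scores above can be reproduced, approximately, with EleutherAI's lm-evaluation-harness. The snippet below is a sketch using the harness's Python API (lm-eval 0.4+); the harness version used for the reported numbers is not stated, so arguments may need adjusting.

```python
# Hypothetical reproduction sketch with EleutherAI's lm-evaluation-harness
# (Python API from lm-eval >= 0.4); the original run may have used an older version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=chlee10/T3Q-Llama3-8B-Inst-sft1.0,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
)
print(results["results"])
```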