
Model Name: ํ’‹ํ’‹์ด (futfut)

Model Concept

  • ํ’‹์‚ด ๋„๋ฉ”์ธ ์นœ์ ˆํ•œ ๋„์šฐ๋ฏธ ์ฑ—๋ด‡์„ ๊ตฌ์ถ•ํ•˜๊ธฐ ์œ„ํ•ด LLM ํŒŒ์ธํŠœ๋‹๊ณผ RAG๋ฅผ ์ด์šฉํ•˜์˜€์Šต๋‹ˆ๋‹ค.
  • Base Model : zephyr-7b-beta
  • ํ’‹ํ’‹์ด์˜ ๋งํˆฌ๋Š” 'ํ•ด์š”'์ฒด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ง๋์— '์–ผ๋งˆ๋“ ์ง€ ๋ฌผ์–ด๋ณด์„ธ์š”! ํ’‹ํ’‹!'๋กœ ์ข…๋ฃŒํ•ฉ๋‹ˆ๋‹ค.

Serving with FastAPI
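
The snippet below is a minimal sketch of how the loaded model could be exposed as a chat endpoint with FastAPI; the route name, request schema, and generation settings are illustrative assumptions rather than the project's actual serving code.

# Minimal FastAPI serving sketch (illustrative; not the actual server code).
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dongwookss/small_fut_final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

app = FastAPI()

class ChatRequest(BaseModel):
    question: str

@app.post("/chat")
def chat(req: ChatRequest):
    # Format the question with the model's chat template and generate a reply.
    messages = [{"role": "user", "content": req.question}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        output = model.generate(
            input_ids,
            max_new_tokens=512,
            do_sample=True,
            temperature=0.6,
            top_p=0.9,
        )
    answer = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return {"answer": answer}

Assuming the file is saved as app.py, the server can be started with uvicorn app:app --host 0.0.0.0 --port 8000.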

Summary:

  • LoRA fine-tuning was performed with the Unsloth package (see the training sketch after this list).

  • Training used the SFT Trainer.

  • Training data

    • q_a_korean_futsal
      • To teach the speaking style, answers were converted to the 'ํ•ด์š”' style and greetings were added so that the model keeps its concept.
  • Environment: Google Colab with an L4 GPU.
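
As referenced above, the sketch below shows one way the Unsloth + SFT Trainer setup could look; the LoRA rank, target modules, dataset path, and training arguments are illustrative assumptions, not the exact configuration used for ํ’‹ํ’‹์ด.

# Rough Unsloth + TRL SFTTrainer sketch (hyperparameters are illustrative).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 2048

# Load the zephyr-7b-beta base in 4-bit and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# q_a_korean_futsal, assumed to be preprocessed into a single "text" column
# containing chat-formatted Q&A rewritten in the 'ํ•ด์š”' style with greetings.
dataset = load_dataset("path/to/q_a_korean_futsal", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()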

    Model Load

    
    #!pip install transformers==4.40.0 accelerate
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = 'Dongwookss/small_fut_final'

    # Load the tokenizer and the fine-tuned model in bfloat16 and let
    # accelerate place the weights across the available devices.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    model.eval()
    

    Query

from transformers import TextStreamer

# Korean system prompt: answer only from the provided context and say
# you don't know when the answer is not in the context.
PROMPT = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.
์ œ์‹œํ•˜๋Š” context์—์„œ๋งŒ ๋Œ€๋‹ตํ•˜๊ณ  context์— ์—†๋Š” ๋‚ด์šฉ์€ ๋ชจ๋ฅด๊ฒ ๋‹ค๊ณ  ๋Œ€๋‹ตํ•ด'''

instruction = "Your question here"  # replace with the user's question

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

# Apply the chat template and move the input ids to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop tokens for generation.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Stream the answer to stdout as it is generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(
    input_ids,
    max_new_tokens=4096,
    eos_token_id=terminators,
    do_sample=True,
    streamer=text_streamer,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
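
For the RAG part mentioned in the model concept, the sketch below continues from the query example above (reusing tokenizer, model, PROMPT, terminators, and text_streamer) and shows one way retrieved futsal context could be injected into the system prompt. The documents and the retrieve() helper are placeholders for whatever retrieval backend is actually used.

# Illustrative RAG step: inject retrieved context into the system prompt.
# The documents and retrieve() below are placeholders for a real vector store.
futsal_docs = [
    "Example futsal rules document ...",
    "Example futsal venue booking document ...",
]

def retrieve(question, docs, top_k=2):
    # Toy keyword-overlap ranking; a real system would use embeddings.
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:top_k]

question = "How many players are on a futsal team?"
context = "\n".join(retrieve(question, futsal_docs))

rag_messages = [
    {"role": "system", "content": f"{PROMPT}\n\ncontext:\n{context}"},
    {"role": "user", "content": question},
]

input_ids = tokenizer.apply_chat_template(
    rag_messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
_ = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    streamer=text_streamer,
    temperature=0.6,
    top_p=0.9,
)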
  
