Dataset

This model was fine-tuned on airesearch/WangchanThaiInstruct.

23 Sep 2024
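To inspect the training data, it can be loaded straight from the Hub. Split and column names below are whatever the dataset actually ships; print a record to see them:

```python
from datasets import load_dataset

# Load the instruction-tuning dataset from the Hugging Face Hub
# and print one record to inspect its schema.
ds = load_dataset("airesearch/WangchanThaiInstruct", split="train")
print(ds)
print(ds[0])
```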

Training details:

  • epochs: 1
  • learning rate: 2e-4
  • learning rate scheduler type: linear
  • warmup ratio: 0.3
  • cutoff length (context length): 2048
  • global batch size: 8
  • fine-tuning type: QLoRA
  • optimizer: adamw_8bit

P.S. Training took about 12 hours on a Kaggle T4 GPU. A sketch of how these settings might fit together is below.
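The card does not include the training script, but the hyperparameters above map naturally onto the Unsloth + TRL stack mentioned later in this card. The sketch below is one plausible wiring, not the author's exact script: the LoRA rank and target modules, the per-device batch / gradient-accumulation split behind the global batch size of 8, and the formatting of the dataset into a `text` column are all assumptions.

```python
# Minimal, hedged QLoRA training sketch using Unsloth + TRL
# (kwarg names follow TRL as of 2024; newer TRL moves these into SFTConfig).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# 4-bit QLoRA base model; sequence length matches the cutoff len above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters. Rank and target modules are assumptions, not from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("airesearch/WangchanThaiInstruct", split="train")

# Hypothetical formatting step: column names here are assumptions; adjust to
# the dataset's real schema. Each record becomes one "text" string rendered
# with the Llama 3.1 chat template.
def to_text(example):
    chat = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(chat, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,  # cutoff len
    args=TrainingArguments(
        num_train_epochs=1,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        warmup_ratio=0.3,
        # 2 x 4 = global batch size 8; the split itself is an assumption.
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        optim="adamw_8bit",
        fp16=True,  # the T4 has no bfloat16 support
        output_dir="outputs",
    ),
)
trainer.train()
```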

Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Konthee/Llama-3.1-8B-ThaiInstruct"

# Load the tokenizer and model; device_map="auto" places weights on GPU if available.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example Thai prompt: "สอนภาษาไทยหน่อย" ("Teach me some Thai, please").
messages = [
    {"role": "user", "content": "สอนภาษาไทยหน่อย"},
]

# Render the Llama 3.1 chat template and move the tokens to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Strip the prompt tokens and decode only the newly generated text.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
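The snippet above loads the model in its native precision (roughly 16 GB of weights for an 8B model in bf16). For smaller GPUs such as the T4 used for training, a hedged alternative is to load the weights in 4-bit with bitsandbytes; output quality may differ slightly from full precision:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4-bit NF4 on load; compute in fp16.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Konthee/Llama-3.1-8B-ThaiInstruct",
    quantization_config=quant_config,
    device_map="auto",
)
# Generation then proceeds exactly as in the snippet above.
```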

Uploaded model

  • Developed by: Konthee
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
