Efficient Fine-Tuning of Large Language Models - Minecraft AI Assistant Tutorial

This repository demonstrates how to fine-tune the Qwen 7B model to create "Andy," an AI assistant for Minecraft. Using the Unsloth framework, this tutorial showcases efficient fine-tuning with 4-bit quantization and LoRA for scalable training on limited hardware.

🚀 Resources

Overview

This README provides step-by-step instructions to:

  1. Install and set up the Unsloth framework.
  2. Initialize the Qwen 7B model with 4-bit quantization.
  3. Implement LoRA Adapters for memory-efficient fine-tuning.
  4. Prepare the Andy-3.5 dataset with Minecraft-specific knowledge.
  5. Configure and execute training in a resource-efficient manner.
  6. Evaluate and deploy the fine-tuned AI assistant.

Key Features

  • Memory-Efficient Training: Fine-tune large models on GPUs as modest as a T4 (Google Colab free tier).
  • LoRA Integration: Modify only key model layers for efficient domain-specific adaptation.
  • Minecraft-Optimized Dataset: Format data using ChatML templates for seamless integration.
  • Accessible Hardware: Utilize cost-effective setups with GPU quantization techniques.

Prerequisites

  • Python Knowledge: Familiarity with basic programming concepts.
  • GPU Access: A T4 (Colab free tier) is sufficient; higher-tier GPUs such as the V100 or A100 are recommended.
  • Optional: Hugging Face Account for model sharing.

Setup

Install the required packages:

!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
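
Before going further, it is worth confirming that a CUDA GPU is visible to PyTorch (installed as a dependency above); the rest of the tutorial assumes one:

import torch

# The fine-tuning steps below assume a CUDA-capable GPU (e.g. a T4 on Colab).
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))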

Model Initialization

Load the Qwen 7B model with 4-bit quantization for reduced resource usage:

from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",  # pre-quantized 4-bit Qwen2.5 7B
    max_seq_length=2048,                       # maximum context length used during training
    dtype=torch.bfloat16,                      # requires an Ampere-or-newer GPU; pass dtype=None to auto-detect (float16 on a T4)
    load_in_4bit=True,                         # 4-bit quantization to cut VRAM usage
    trust_remote_code=True,
)

Adding LoRA Adapters

Add LoRA to fine-tune specific layers efficiently:

model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank: size of the low-rank update matrices
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "embed_tokens", "lm_head"],  # attention projections plus embeddings/output head (the latter two increase memory use)
    lora_alpha=16,   # scaling factor applied to the LoRA updates
    lora_dropout=0,  # no dropout on the adapter weights
    use_gradient_checkpointing="unsloth",  # Unsloth's checkpointing for lower memory at longer contexts
)
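
To confirm how small the trainable footprint is, the PEFT-wrapped model returned above exposes a summary of trainable versus total parameters (exact counts depend on the target modules chosen):

# Print trainable vs. total parameter counts for the LoRA-wrapped model.
model.print_trainable_parameters()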

Dataset Preparation

Prepare the Minecraft dataset (Andy-3.5):

from datasets import load_dataset
from unsloth.chat_templates import get_chat_template

dataset = load_dataset("Sweaterdog/Andy-3.5", split="train")
tokenizer = get_chat_template(tokenizer, chat_template="chatml")
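
The SFTTrainer configured below reads a plain text column, so the raw conversations need to be rendered through the ChatML template first. A minimal sketch, assuming the Andy-3.5 records store ShareGPT-style conversations (a conversations column of {"from", "value"} turns) and that your Unsloth version ships the standardize_sharegpt helper; adjust the field names to the actual dataset schema:

from unsloth.chat_templates import standardize_sharegpt

# Convert ShareGPT-style {"from", "value"} turns into {"role", "content"} messages
# (skip this step if the dataset already uses role/content messages).
dataset = standardize_sharegpt(dataset)

def format_conversations(examples):
    # Render each conversation into a single ChatML-formatted training string.
    texts = [
        tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False)
        for convo in examples["conversations"]
    ]
    return {"text": texts}

dataset = dataset.map(format_conversations, batched=True)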

Training Configuration

Set up the training parameters:

from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=16,
        max_steps=1000,
        learning_rate=2e-5,
        gradient_checkpointing=True,
        output_dir="outputs",
        fp16=True,
    ),
)

Clear unused memory before training:

import torch

# Release cached GPU memory held by PyTorch so training starts with maximum headroom.
torch.cuda.empty_cache()
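
If you want to check how much headroom remains before starting, standard PyTorch calls report current GPU memory usage:

# Report current GPU memory usage (values depend on your hardware).
print(f"Allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
print(f"Reserved:  {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")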

Train the Model

Initiate training:

trainer_stats = trainer.train()
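
trainer.train() returns a standard Hugging Face TrainOutput object; its metrics dictionary gives a quick summary of the run:

# Inspect runtime and loss statistics after training.
print(trainer_stats.metrics)  # includes train_runtime, train_loss, train_samples_per_second, ...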

Save and Share

Save your fine-tuned model locally or upload it to the Hugging Face Hub:

model.save_pretrained("andy_minecraft_assistant")
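
To upload the adapter to the Hugging Face Hub as well, the standard push_to_hub methods work on the PEFT model and tokenizer. A minimal sketch, assuming you are logged in (huggingface-cli login) and replacing your-username with your own namespace:

tokenizer.save_pretrained("andy_minecraft_assistant")

# Hypothetical repository name; change it to your own Hub namespace.
model.push_to_hub("your-username/andy_minecraft_assistant")
tokenizer.push_to_hub("your-username/andy_minecraft_assistant")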

Optimization Tips

  • Expand the dataset for broader Minecraft scenarios.
  • Adjust training steps for better accuracy.
  • Fine-tune inference parameters (e.g. temperature and top-p) for more natural responses; see the sketch below.
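
A minimal inference sketch, assuming the fine-tuned model and tokenizer are still loaded from the steps above. FastLanguageModel.for_inference enables Unsloth's faster generation path; temperature and top_p are the parameters to tune for response style, and the prompt is just an illustrative example:

FastLanguageModel.for_inference(model)  # switch Unsloth into its optimized inference mode

messages = [{"role": "user", "content": "How do I craft a diamond pickaxe?"}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,   # enable sampling so temperature/top_p take effect
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))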

For more details on Unsloth or to contribute, visit the Unsloth GitHub repository: https://github.com/unslothai/unsloth.

Happy fine-tuning! 🎮

Citation

@misc{celaya2025minecraft,
  author       = {Christopher B. Celaya},
  title        = {Efficient Fine-Tuning of Large Language Models - A Minecraft AI Assistant Tutorial},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/kolbytn/mindcraft}},
  note         = {\url{https://chris-celaya-blog.vercel.app/articles/unsloth-training}}
}
