{ "cells": [ { "cell_type": "markdown", "source": [ "To run this, press \"*Runtime*\" and press \"*Run all*\" on a **free** Tesla T4 Google Colab instance!\n", "
\n", "\n", "To install Unsloth on your own computer, follow the installation instructions on our Github page [here](https://github.com/unslothai/unsloth#installation-instructions---conda).\n", "\n", "You will learn how to do [data prep](#Data), how to [train](#Train), how to [run the model](#Inference), & [how to save it](#Save) (eg for Llama.cpp).\n", "\n", "See on our [blog post](https://unsloth.ai/blog/gemma) on how we made **Gemma 7b 2.5x faster** and **Gemma 2b 2x faster**!" ], "metadata": { "id": "IqM-T1RTzY6C" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2eSvM9zX_2d3" }, "outputs": [], "source": [ "%%capture\n", "# Installs Unsloth, Xformers (Flash Attention) and all other packages!\n", "!pip install unsloth\n", "# Get latest Unsloth\n", "!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"" ] }, { "cell_type": "markdown", "source": [ "* We support Llama, Mistral, CodeLlama, TinyLlama, Vicuna, Open Hermes etc\n", "* And Yi, Qwen ([llamafied](https://huggingface.co./models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n", "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n", "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n", "* [**NEW**] With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co./unsloth) has Llama, Mistral 4bit models." ], "metadata": { "id": "r2v_X2fA0Df5" } }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 385, "referenced_widgets": [ "1f9b5284d7ac40f0beae41acfd6c7f62", "78be9340df944dbb8fe857d641e096ea", "63861061417c453da4873d76bf6a4e55", "6e2cb14dd0cf4ff381915084d1fe92e1", "d86db4fd81c94a8c9c48d46e3820488c", "f1ec0cc6439845f3be6824505f14fc70", "cbd1a312a9264b7ba0d4b6b9e1ebb10b", "6aabab17366044eaa980bbadc48f4f2f", "fd739012822541c9b96bb46cb95b23ac", "e6601fbc3bc743d487a184c2ab375fa9", "37145878d083460f8fff02127f34cf41", "5242a17835764e9285a183e0fdd182ac", "00291b5bdbde4f6698f00e3a69297943", "59a25cb526d44a5a940e202152bed008", "bda5e969cb9e425aaf12192d002cb504", "d0a11e0b5c4c469ebac3796b8394d693", "6954e969d6ba42309cc3062edb097024", "76a4ce00e0f243a9ab98b593a9ebc243", "2f2e08e690e04b44a9e96597a63915f6", "ec91b7a9907d41159aac8aaf29079e28", "09bbe6b538eb4c8eb120d8757c685e80", "7263f7eef98d454faa6bc7adf3ca0aea", "84cddcbfd3b7483287a9e450d0128114", "e39907e22e7e4507b567501c14c7a436", "2b12695f13a7418d863f413d7c735c0e", "ee1050424f8845bab3723f118504e8c6", "fb72b76a165843a289aa56f9dcc5ceb0", "0ec6d0df052944209e596ecb1bba0598", "c90a5c749c3645ad8100c405a347a512", "52ca1ed1b4b34d67ae68d4a5b1776aab", "27892941bbb54ad9b9d620a5cebc5939", "96eba3df6fc34338b8ac0d2d7deb4393", "76741f9f11064a16a3ea4882af0cf25d", "d0776e74927b4819bc87eca981b3a7b8", "e02f182cda3d4d68b0daa5c4ae7674ac", "be61b42787704da1911367fe368d99c1", "00b21c791faf4e5d9370ccfc5838147f", "958e7b7857d64b5cab5923e4440fd094", "bcdf494a4d52411092d26287fddc5446", "02c83fef850749c283a43b98bc2d499a", "5678f0c46753468ab21ba43641316df3", "555ca3185a084cae9baa996b45e848c7", "c27165aa15204124b6d22e4fa1a683b5", "c834a08b82b942d0a5e45315852b3301", "3ed82a0a904c4626be80ab5e4de94f98", "4bab62c35d33434ebf0d711f65f9c1be", "ac6a39ddea724aa3854e9565ce59adf9", "91e8c483b92a4197887c9a30bfd60501", 
"f0c6d5209b0d424bb24527060526ce19", "8efd2af34e5c4f439012718545ff756f", "65ea2f319a304a148bc8db50dd0ff99d", "b0358483e49342818785aa40ed0280e0", "8f52986a2a714b2a93d1ef52d7687689", "095ba563cbf040acb1ca84e951c20c86", "bf29a0e5f7b3435692099559528e4a54", "b580f55bfcb44079befd68c85e25cb0b", "083d919afa8f473490b42debf842e201", "a548789e485446a9a41aed8d7cd89779", "a3f3a55c309d4f9db7d5107e4e29da26", "ed3475930ea746dcb5d57f2e5d6864d7", "b80e89c062cb42d187473222dcca3062", "f26cfb3119774c1aa18b4f1957dd721f", "8ac34ce28c064b66b84898fb8817a0e9", "615850f111fa4fbea0dadfeedbe0a3c8", "f2c27d6099234b36a684dbed6585d5be", "13b9aaf1122d45fdacda3a538931aad5", "b21a5d8b0d0940a38eab0a0d466cb64e", "6742d77b371d4bfdbe7bc34e751740c0", "570dfcd6d1c94d59b86fddb038ea4c7d", "aabfa8e68a31404883c66b0fd6d67883", "751f91346de64ac89ecf1e8af8f7bfd3", "b19bca82ff0c46378802ce8e6c1a3757", "a2d03fca2bb5496cb3d4b24d950313b2", "61e745eb141148f0879f9468450fde60", "790dd907de0a41378ef5dfa3142ac0d4", "b312441a69fe466f8089b4d1d27e3247", "6b014e76b05c4e74a60193a7663d0282" ] }, "id": "QmUBVEnvCDJv", "outputId": "66693dbb-5f95-4394-d30d-1a99aea504f2" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "config.json: 0%| | 0.00/1.11k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "1f9b5284d7ac40f0beae41acfd6c7f62" } }, "metadata": {} }, { "output_type": "stream", "name": "stdout", "text": [ "==((====))== Unsloth: Fast Gemma patching release 2024.3\n", " \\\\ /| GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.\n", "O^O/ \\_/ \\ Pytorch: 2.2.1+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.\n", "\\ / Bfloat16 = FALSE. Xformers = 0.0.24. FA = False.\n", " \"-____-\" Free Apache license: http://github.com/unslothai/unsloth\n" ] }, { "output_type": "stream", "name": "stderr", "text": [ "/usr/local/lib/python3.10/dist-packages/transformers/quantizers/auto.py:155: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. 
The `quantization_config` from the model will be used.\n", " warnings.warn(warning_msg)\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "model.safetensors: 0%| | 0.00/5.57G [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "5242a17835764e9285a183e0fdd182ac" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "generation_config.json: 0%| | 0.00/137 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "84cddcbfd3b7483287a9e450d0128114" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer_config.json: 0%| | 0.00/2.16k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "d0776e74927b4819bc87eca981b3a7b8" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer.model: 0%| | 0.00/4.24M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "3ed82a0a904c4626be80ab5e4de94f98" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer.json: 0%| | 0.00/17.5M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "b580f55bfcb44079befd68c85e25cb0b" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "special_tokens_map.json: 0%| | 0.00/636 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "b21a5d8b0d0940a38eab0a0d466cb64e" } }, "metadata": {} } ], "source": [ "from unsloth import FastLanguageModel\n", "import torch\n", "max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!\n", "dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n", "load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.\n", "\n", "# 4bit pre quantized models we support for 4x faster downloading + no OOMs.\n", "fourbit_models = [\n", " \"unsloth/mistral-7b-bnb-4bit\",\n", " \"unsloth/mistral-7b-instruct-v0.2-bnb-4bit\",\n", " \"unsloth/llama-2-7b-bnb-4bit\",\n", " \"unsloth/gemma-7b-bnb-4bit\",\n", " \"unsloth/gemma-7b-it-bnb-4bit\", # Instruct version of Gemma 7b\n", " \"unsloth/gemma-2b-bnb-4bit\",\n", " \"unsloth/gemma-2b-it-bnb-4bit\", # Instruct version of Gemma 2b\n", "] # More models at https://huggingface.co./unsloth\n", "\n", "model, tokenizer = FastLanguageModel.from_pretrained(\n", " model_name = \"unsloth/gemma-7b-bnb-4bit\", # Choose ANY! eg teknium/OpenHermes-2.5-Mistral-7B\n", " max_seq_length = max_seq_length,\n", " dtype = dtype,\n", " load_in_4bit = load_in_4bit,\n", " # token = \"hf_...\", # use one if using gated models like meta-llama/Llama-2-7b-hf\n", ")" ] }, { "cell_type": "markdown", "source": [ "We now add LoRA adapters so we only need to update 1 to 10% of all parameters!" 
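, "\n", "As a quick sanity check after running the next cell, you can count how few of the weights will actually train (a minimal sketch, assuming the `model` returned by `get_peft_model` below; the result should land around the 50,003,968 reported by the training log further down):\n", "```python\n", "trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\n", "total = sum(p.numel() for p in model.parameters())\n", "print(f\"{trainable:,} trainable / {total:,} total = {100 * trainable / total:.2f}%\")\n", "```"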
], "metadata": { "id": "SXd9bTZd1aaL" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6bZsfBuZDeCL", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "c1baeb51-165d-4f4e-83ca-f8ffc8222ccf" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "Unsloth 2024.3 patched 28 layers with 28 QKV layers, 28 O layers and 28 MLP layers.\n" ] } ], "source": [ "model = FastLanguageModel.get_peft_model(\n", " model,\n", " r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128\n", " target_modules = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n", " \"gate_proj\", \"up_proj\", \"down_proj\",],\n", " lora_alpha = 16,\n", " lora_dropout = 0, # Supports any, but = 0 is optimized\n", " bias = \"none\", # Supports any, but = \"none\" is optimized\n", " use_gradient_checkpointing = True,\n", " random_state = 3407,\n", " use_rslora = False, # We support rank stabilized LoRA\n", " loftq_config = None, # And LoftQ\n", ")" ] }, { "cell_type": "markdown", "source": [ "\n", "### Data Prep\n", "We now use the Alpaca dataset from [yahma](https://huggingface.co./datasets/yahma/alpaca-cleaned), which is a filtered version of 52K of the original [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html). You can replace this code section with your own data prep.\n", "\n", "**[NOTE]** To train only on completions (ignoring the user's input) read TRL's docs [here](https://huggingface.co./docs/trl/sft_trainer#train-on-completions-only).\n", "\n", "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!! Otherwise you'll get infinite generations!\n", "\n", "If you want to use the `ChatML` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing).\n", "\n", "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)." 
], "metadata": { "id": "vITh0KVJ10qX" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LjY75GoYUCB8", "colab": { "base_uri": "https://localhost:8080/", "height": 145, "referenced_widgets": [ "fc06e86eb977492e843157bcbe5e4884", "b0677fbb5aeb4b3a9c16f03c6caa15ee", "077e7ddaf98b468aa5110b43295ca455", "339122ace10f440c95990a6ead8ea74b", "0a829eb7d22c4a99a8b1645e045aafe0", "6c3941826efb43f3a2c664b9dc7124a9", "22e28a93b43a494fa113aeb1b276260d", "2af0967a93d84c9c9390aae966880542", "c96a3ba973a24ed2a6ec703255e1b869", "4c679a1a259840ccb540893ade215723", "3e8ca7105d2d4ff8afdad1c75f4f00c8", "9e0d496bcf954d5690b360c241b9f699", "08d3986daea8460481990642635c762d", "c638224e495b4d02bcf93bb993a04608", "13b982e310d0440c96dd2f59fa5eafaa", "a1d519dde1464f028fecaf0080951ef4", "48297bc87b604c68a9e7281f23cdbfee", "dd057d017b7f49da9dbdbf76639a0618", "879142eadb3a45c6903f81fd9c3fd9c5", "8d41716e6ed048139565b0bd3c25a1ad", "518f6dd5817b4d36965d2cf84c8b3cbd", "4f55c2f5da9e42baa1ce56eedeee09be", "d89b5a6e73c9453f8627b7fff996797e", "6d196e71f6d048a897cffa12040fa9fb", "2840d220bc3a4ee98f2d5e1a02e01058", "788c7488c4784d85abd173897f5c6414", "dfed5c25c29145528dd482d4fbd9e29c", "ad55068b22dc42f5b5e5b58077769ef0", "7119cddd8d504e1db92d4b639806f00a", "a08ebc53a07b43f3873d89802b667018", "e26fc9e95d0d4972a3f987dd348e56a7", "692e4a81b76b4b22a9d5300a672bbd3f", "380cf17b22004120822515ab9d44beb6", "84b42451daf7432fb17fa670f78c22fe", "d9ef20e1474646c1922534b6be20daef", "acdeefe570aa4a259ab5e659438513a4", "7ddd0b85cc624ed6a0bde81c09953365", "ecc10dba505047ecba4641bf92cebc70", "8790948fb64c42d280638d69f4e886dc", "2567958ef90b4bf49cf76059a48111d5", "547e0a8c7129457cae3ebe759fb0595f", "b3005b4bdfb84c549ad98de047dcb21d", "f9eabd2903424e72b9493f62733a412b", "1172a2311ec54ed7aebaec740e711d57" ] }, "outputId": "cd4d6fb5-4e9f-4844-f6fd-d40cf3ec9f17" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "Downloading readme: 0%| | 0.00/11.6k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "fc06e86eb977492e843157bcbe5e4884" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Downloading data: 0%| | 0.00/44.3M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "9e0d496bcf954d5690b360c241b9f699" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Generating train split: 0 examples [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "d89b5a6e73c9453f8627b7fff996797e" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Map: 0%| | 0/51760 [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "84b42451daf7432fb17fa670f78c22fe" } }, "metadata": {} } ], "source": [ "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "{}\n", "\n", "### Input:\n", "{}\n", "\n", "### Response:\n", "{}\"\"\"\n", "\n", "EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN\n", "def formatting_prompts_func(examples):\n", " instructions = examples[\"instruction\"]\n", " inputs = examples[\"input\"]\n", " outputs = examples[\"output\"]\n", " texts = []\n", " for instruction, input, output in zip(instructions, inputs, outputs):\n", " # Must add EOS_TOKEN, otherwise your generation will go on forever!\n", " text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN\n", " texts.append(text)\n", " return { \"text\" : texts, }\n", "pass\n", "\n", "from datasets import load_dataset\n", "dataset = load_dataset(\"yahma/alpaca-cleaned\", split = \"train\")\n", "dataset = dataset.map(formatting_prompts_func, batched = True,)" ] }, { "cell_type": "markdown", "source": [ "<a name=\"Train\"></a>\n", "### Train the model\n", "Now let's use Hugging Face TRL's `SFTTrainer`! More docs here: [TRL SFT docs](https://huggingface.co./docs/trl/sft_trainer). We do 60 steps to speed things up, but for a full run you can set `num_train_epochs = 1` and remove `max_steps = 60` (see the sketch after the training cell below). We also support TRL's `DPOTrainer`!" ], "metadata": { "id": "idAEIeSQ3xdS" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "95_Nn-89DhsL", "colab": { "base_uri": "https://localhost:8080/", "height": 122, "referenced_widgets": [ "388a2aaff9cd4725b78eb70136d12284", "a18a1a5abd814bfe876cc029ce75638b", "cf9f06653dd84a508b25a7d7ab412ae6", "9ba830d34c6d444c9d2c0c8d289a982c", "dc158fbc11ee405e861dcdce7a08639e", "f5ba9127d9e74833953d9e67844a84d8", "04e9f7bea00e40cca234faf98e23daa5", "ba5ce0d461074b2ba9323c01f764304b", "53f0301919644ea49cfcc35bf527f746", "d2766f23e0c2446996a0d7ed42cf430d", "43b89f289be544d5bb9bd8ee96f10f8c" ] }, "outputId": "1e1192a5-06b4-4f73-cabe-3fe73d835695" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "Map (num_proc=2): 0%| | 0/51760 [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "388a2aaff9cd4725b78eb70136d12284" } }, "metadata": {} }, { "output_type": "stream", "name": "stderr", "text": [ "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches', 'even_batches', 'use_seedable_sampler']). 
Please pass an `accelerate.DataLoaderConfiguration` instead: \n", "dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False, even_batches=True, use_seedable_sampler=True)\n", " warnings.warn(\n" ] } ], "source": [ "from trl import SFTTrainer\n", "from transformers import TrainingArguments\n", "\n", "trainer = SFTTrainer(\n", " model = model,\n", " tokenizer = tokenizer,\n", " train_dataset = dataset,\n", " dataset_text_field = \"text\",\n", " max_seq_length = max_seq_length,\n", " dataset_num_proc = 2,\n", " packing = False, # Packing can make training 5x faster for short sequences.\n", " args = TrainingArguments(\n", " per_device_train_batch_size = 2,\n", " gradient_accumulation_steps = 4,\n", " warmup_steps = 5,\n", " max_steps = 60,\n", " learning_rate = 2e-4,\n", " fp16 = not torch.cuda.is_bf16_supported(),\n", " bf16 = torch.cuda.is_bf16_supported(),\n", " logging_steps = 1,\n", " optim = \"adamw_8bit\",\n", " weight_decay = 0.01,\n", " lr_scheduler_type = \"linear\",\n", " seed = 3407,\n", " output_dir = \"outputs\",\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2ejIt2xSNKKp", "colab": { "base_uri": "https://localhost:8080/" }, "cellView": "form", "outputId": "9489e2a7-2c8c-47e6-dcc9-44300a4c97e6" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "GPU = Tesla T4. Max memory = 14.748 GB.\n", "5.938 GB of memory reserved.\n" ] } ], "source": [ "#@title Show current memory stats\n", "gpu_stats = torch.cuda.get_device_properties(0)\n", "start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n", "max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n", "print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n", "print(f\"{start_gpu_memory} GB of memory reserved.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yqxqAZ7KJ4oL", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "outputId": "b753d4aa-db8d-4129-9a4e-53194865a615" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1\n", " \\\\ /| Num examples = 51,760 | Num Epochs = 1\n", "O^O/ \\_/ \\ Batch size per device = 2 | Gradient Accumulation steps = 4\n", "\\ / Total batch size = 8 | Total steps = 60\n", " \"-____-\" Number of trainable parameters = 50,003,968\n" ] }, { "output_type": "display_data", "data": { "text/plain": [
"Step | Training Loss\n",
"---|---\n",
"1 | 1.751900\n",
"2 | 2.306900\n",
"3 | 1.609100\n",
"4 | 1.755200\n",
"5 | 1.426000\n",
"6 | 1.370200\n",
"7 | 0.986500\n",
"8 | 1.162100\n",
"9 | 1.005800\n",
"10 | 1.075900\n",
"11 | 0.902600\n",
"12 | 0.938500\n",
"13 | 0.865400\n",
"14 | 1.012700\n",
"15 | 0.847900\n",
"16 | 0.853400\n",
"17 | 0.969700\n",
"18 | 1.194500\n",
"19 | 0.954200\n",
"20 | 0.843800\n",
"21 | 0.842700\n",
"22 | 0.877500\n",
"23 | 0.829100\n",
"24 | 0.937000\n",
"25 | 1.034500\n",
"26 | 1.017000\n",
"27 | 1.001900\n",
"28 | 0.847000\n",
"29 | 0.808900\n",
"30 | 0.830700\n",
"31 | 0.815100\n",
"32 | 0.846000\n",
"33 | 0.935700\n",
"34 | 0.799200\n",
"35 | 0.878700\n",
"36 | 0.814500\n",
"37 | 0.821100\n",
"38 | 0.726300\n",
"39 | 1.035300\n",
"40 | 1.112300\n",
"41 | 0.881100\n",
"42 | 0.901100\n",
"43 | 0.895500\n",
"44 | 0.845300\n",
"45 | 0.885600\n",
"46 | 0.885400\n",
"47 | 0.813100\n",
"48 | 1.125500\n",
"49 | 0.858400\n",
"50 | 1.005600\n",
"51 | 0.960500\n",
"52 | 0.895900\n",
"53 | 0.944900\n",
"54 | 1.107000\n",
"55 | 0.746100\n",
"56 | 1.003100\n",
"57 | 0.863500\n",
"58 | 0.796100\n",
"59 | 0.788700\n",
"60 | 0.867200\n"
]
},
"metadata": {}
}
],
"source": [
"trainer_stats = trainer.train()"
]
},
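{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, for a full pass over the dataset instead of the 60-step demo, swap the step cap for an epoch count in `TrainingArguments` (a minimal sketch of the two lines to change):\n",
"```python\n",
"# max_steps = 60,        # remove the step cap...\n",
"num_train_epochs = 1,    # ...and train for one full epoch instead\n",
"```"
]
},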
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pCqnaKmlO1U9",
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "7b6f3381-fc91-4081-828d-a807ffaefaef"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"549.6228 seconds used for training.\n",
"9.16 minutes used for training.\n",
"Peak reserved memory = 11.002 GB.\n",
"Peak reserved memory for training = 5.064 GB.\n",
"Peak reserved memory % of max memory = 74.6 %.\n",
"Peak reserved memory for training % of max memory = 34.337 %.\n"
]
}
],
"source": [
"#@title Show final memory and time stats\n",
"used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
"used_memory_for_lora = round(used_memory - start_gpu_memory, 3)\n",
"used_percentage = round(used_memory /max_memory*100, 3)\n",
"lora_percentage = round(used_memory_for_lora/max_memory*100, 3)\n",
"print(f\"{trainer_stats.metrics['train_runtime']} seconds used for training.\")\n",
"print(f\"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.\")\n",
"print(f\"Peak reserved memory = {used_memory} GB.\")\n",
"print(f\"Peak reserved memory for training = {used_memory_for_lora} GB.\")\n",
"print(f\"Peak reserved memory % of max memory = {used_percentage} %.\")\n",
"print(f\"Peak reserved memory for training % of max memory = {lora_percentage} %.\")"
]
},
{
"cell_type": "markdown",
"source": [
"\n",
"### Inference\n",
"Let's run the model! You can change the instruction and input - leave the output blank!"
],
"metadata": {
"id": "ekOmTR1hSNcr"
}
},
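{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd like to watch tokens stream out one by one instead of waiting for the full decode, `transformers.TextStreamer` can be passed to `generate` (a minimal sketch, reusing the `model`, `tokenizer`, and `inputs` from the cell below):\n",
"```python\n",
"from transformers import TextStreamer\n",
"\n",
"text_streamer = TextStreamer(tokenizer)\n",
"_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)\n",
"```"
]
},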
{
"cell_type": "code",
"source": [
"# alpaca_prompt = Copied from above\n",
"FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"Continue the fibonnaci sequence.\", # instruction\n",
" \"1, 1, 2, 3, 5, 8\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
"tokenizer.batch_decode(outputs)"
],
"metadata": {
"id": "kR3gIAX-SM2q",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "d77da63c-2fd8-4eb8-9617-21e9af2a1b71"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"['