Intermediate·90 mins·Jan 31, 2026

Fine-Tune an LLM in Google Colab with Unsloth and Your Own Custom Data

Overview

This codelab walks you through fine-tuning a large language model (LLM) on your own custom data entirely inside Google Colab, using Unsloth for faster, memory-efficient training. You'll use LoRA (Low-Rank Adaptation) or QLoRA (quantized LoRA) so that models like Llama 3.1 8B can be fine-tuned on a free-tier Colab T4 GPU with roughly 15 GB of usable VRAM.

You will learn how to set up Colab with the right runtime, install Unsloth, choose and load a base model, prepare and format your custom dataset (e.g. question–answer pairs or conversations), apply the correct chat template, run training, and finally save the adapter and run inference. By the end you'll have a reproducible pipeline to adapt any compatible open-source LLM to your domain or task using your own data.


Prerequisites

  • Google account (for Colab)
  • Basic Python (variables, functions, loops)
  • Familiarity with Jupyter/Colab notebooks

What you will build

01

Colab setup and Unsloth install

Enable a GPU runtime in Google Colab and install Unsloth plus dependencies so you can fine-tune LLMs with minimal VRAM.

12m
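In a fresh Colab runtime, the install typically boils down to a couple of commands (a sketch; the codelab may pin specific versions). In a Colab cell, prefix each command with `!`:

```shell
# Install Unsloth (pulls in transformers, trl, peft, bitsandbytes as dependencies).
pip install unsloth

# Confirm a CUDA GPU is visible before training
# (select one first via Runtime > Change runtime type > T4 GPU).
nvidia-smi
```

If `nvidia-smi` reports no devices, the runtime is CPU-only and training will fail, so fix the runtime type before continuing.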
02

Choose a model and load it with Unsloth

Pick a base instruct model (e.g. Llama 3.1 8B), load it in 4-bit with Unsloth for QLoRA, and prepare it for training.

15m
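A minimal sketch of this step, assuming the `unsloth` package from step 01 is installed; the model name and LoRA hyperparameters here are illustrative starting points, not values fixed by the codelab:

```python
# Illustrative constants (assumptions, adapt to your model and data).
MODEL_NAME = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"  # pre-quantized 4-bit build
MAX_SEQ_LENGTH = 2048  # should cover your longest formatted training example

def load_base_model():
    # Imported inside the function so the sketch reads without an active GPU session.
    from unsloth import FastLanguageModel

    # load_in_4bit=True is what makes this QLoRA: the base weights stay
    # quantized while the LoRA adapters train in higher precision.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=MODEL_NAME,
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,
    )

    # Attach LoRA adapters; only these small matrices are updated during training.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,              # LoRA rank: higher = more capacity, more VRAM
        lora_alpha=16,
        lora_dropout=0.0,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    return model, tokenizer
```

Loading a pre-quantized `bnb-4bit` checkpoint also avoids downloading full 16-bit weights, which matters on Colab's limited disk and RAM.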
03

Prepare your custom dataset

Structure your data as question–answer pairs or conversations, load from CSV/JSON/Hugging Face, and decide on format (Alpaca vs chat).

18m
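For the Alpaca-style option, turning question–answer rows into training records can look like this sketch; the column names `question`/`answer` and the prompt wording are assumptions you should adapt to your own data:

```python
import csv
import io

# Classic Alpaca-style prompt scaffold (illustrative wording).
ALPACA_PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{question}\n\n### Response:\n{answer}"
)

def rows_to_alpaca(csv_text: str) -> list[dict]:
    """Convert CSV rows with 'question'/'answer' columns into Alpaca-style records."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"text": ALPACA_PROMPT.format(question=row["question"], answer=row["answer"])}
        for row in reader
    ]

sample = "question,answer\nWhat is LoRA?,A low-rank adapter method.\n"
records = rows_to_alpaca(sample)
```

A list of such `{"text": ...}` dicts can be wrapped with `datasets.Dataset.from_list(records)` to get the Hugging Face dataset the trainer expects; for the conversational format, skip the scaffold and keep role/content message lists instead (covered in the next step).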
04

Apply chat template and format data for training

Use Unsloth's chat template for your model, format each example into a single text string, tokenize, and create the training dataset.

20m
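The shape of this step can be sketched as follows. The role/content message structure is standard; the `get_chat_template` call mentioned in the comment is Unsloth's helper for matching the template to your base model (the template name `"llama-3.1"` is an assumption tied to the example model):

```python
def qa_to_messages(question: str, answer: str) -> list[dict]:
    """Shape one Q-A pair as the role/content conversation chat templates expect."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]

def format_with_template(tokenizer, conversations):
    # Assumes `tokenizer` was first patched with Unsloth's template helper, e.g.:
    #   from unsloth.chat_templates import get_chat_template
    #   tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")
    # tokenize=False returns one plain text string per conversation, which is
    # what the supervised fine-tuning trainer consumes in the next step.
    return [
        tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False)
        for convo in conversations
    ]

msgs = qa_to_messages("What is QLoRA?", "LoRA on a 4-bit quantized base model.")
```

Using the template that matches the base model matters: an instruct model fine-tuned with the wrong special tokens will behave erratically at inference time.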
05

Train with LoRA

Run supervised fine-tuning with the Hugging Face Trainer, set learning rate and batch size, monitor loss, and avoid overfitting.

22m
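A sketch of wiring up the trainer, assuming `trl` and `transformers` are installed alongside Unsloth. The hyperparameters are common starting points, not prescriptions, and the argument names follow the style of Unsloth's example notebooks (newer `trl` releases move some of them into `SFTConfig`):

```python
def build_trainer(model, tokenizer, train_dataset):
    # Imported inside the function so the sketch reads without a GPU session.
    from transformers import TrainingArguments
    from trl import SFTTrainer

    return SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,      # rows with the formatted "text" field
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            output_dir="outputs",
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,  # effective batch size of 8
            learning_rate=2e-4,             # common LoRA starting point
            num_train_epochs=1,             # more epochs risk overfitting small datasets
            logging_steps=10,               # watch the loss reported here
            fp16=True,
        ),
    )
```

Typical usage is `stats = build_trainer(model, tokenizer, dataset).train()`; if the logged loss flattens early or starts climbing on a held-out split, stop training rather than adding epochs.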
06

Save, evaluate, and run inference

Load the saved LoRA adapter, run FastLanguageModel.for_inference, test with sample prompts, and optionally push to Hugging Face or export to GGUF.

15m
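The save-and-test loop can be sketched like this; `model` and `tokenizer` are assumed to come from the training steps, and `"lora_model"` is an illustrative output path:

```python
def save_and_generate(model, tokenizer, prompt: str) -> str:
    from unsloth import FastLanguageModel

    # Saves only the small LoRA adapter (megabytes), not the full base model.
    model.save_pretrained("lora_model")
    tokenizer.save_pretrained("lora_model")

    # Switch Unsloth into its faster inference mode before generating.
    FastLanguageModel.for_inference(model)

    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

From here, Unsloth's `model.push_to_hub(...)` and GGUF export helpers cover the optional sharing and llama.cpp deployment paths mentioned above; compare the fine-tuned answers against the base model's on a handful of held-out prompts to sanity-check the adapter.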

Codelabs are continuously evolving. Spot an issue or want to contribute? Reach out and help polish these workshops.