Claude Skills Guide

Claude Code for Unsloth: A Fast Fine-Tuning Workflow Tutorial

Fine-tuning large language models has become essential for developers building specialized AI applications. Unsloth, an optimized fine-tuning library, makes this process significantly faster by reducing memory usage and speeding up training. When combined with Claude Code, you get a powerful workflow that automates repetitive tasks and accelerates your fine-tuning pipeline.

This tutorial walks you through setting up and using Claude Code with Unsloth for efficient LLM fine-tuning.

Understanding Unsloth’s Speed Advantages

Unsloth achieves remarkable speed improvements through several key optimizations:

  - Hand-written Triton kernels that replace slower stock PyTorch operations
  - Manually derived backpropagation steps that avoid redundant computation
  - 4-bit quantization via bitsandbytes, which shrinks the memory footprint
  - Memory-efficient LoRA support with Unsloth's own gradient checkpointing

These optimizations allow you to fine-tune models such as Llama 3, Mistral, and Phi-3 on consumer hardware with significantly reduced training times.

Setting Up Your Development Environment

Before diving into the workflow, ensure your environment is properly configured. First, install the necessary dependencies:

pip install unsloth transformers torch accelerate peft
pip install bitsandbytes scipy trl

Verify your CUDA installation for optimal performance:

import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")

Create a project directory for your fine-tuning workflow and launch Claude Code inside it:

mkdir unsloth-finetune
cd unsloth-finetune
claude

Building Your Fine-Tuning Pipeline

Step 1: Data Preparation

Organize your training data in the appropriate format. Unsloth works well with JSONL files containing prompt-response pairs:

{"prompt": "Summarize this article:", "response": "The article discusses..."}
{"prompt": "Translate to Spanish:", "response": "Hola mundo..."}

Use Claude Code to validate and preprocess your dataset:

from datasets import load_dataset

def format_dataset(examples):
    # Format prompts with instruction template
    formatted = []
    for prompt, response in zip(examples['prompt'], examples['response']):
        formatted.append(f"### Instruction\n{prompt}\n\n### Response\n{response}")
    return {'text': formatted}

dataset = load_dataset('json', data_files='train.jsonl')
dataset = dataset.map(format_dataset, batched=True)
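Before formatting, it helps to sanity-check the raw JSONL. This is a minimal sketch using only the standard library; the train.jsonl filename matches the example above, and the required keys follow the prompt/response format shown earlier.

```python
import json

def validate_jsonl(path, required_keys=("prompt", "response")):
    """Return a list of (line_number, problem) tuples for a JSONL dataset."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # tolerate blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as e:
                problems.append((i, f"invalid JSON: {e}"))
                continue
            for key in required_keys:
                if not record.get(key):
                    problems.append((i, f"missing or empty '{key}'"))
    return problems

# Fail fast before training if anything is malformed:
# issues = validate_jsonl("train.jsonl")
# assert not issues, issues
```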

Step 2: Model Configuration

Initialize your Unsloth model with optimal settings:

from unsloth import FastLanguageModel
import torch

# Load model with 4-bit quantization for memory efficiency
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    dtype=torch.float16,
    load_in_4bit=True,
)

# Add LoRA adapters for efficient fine-tuning
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
)
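To see how lightweight these adapters are, you can estimate the trainable parameter count by hand: each LoRA pair adds r × (d_in + d_out) parameters per adapted projection. The dimensions below are the published Llama-3-8B shapes (hidden size 4096, 32 layers, grouped-query attention with 1024-wide k/v projections); treat them as illustrative assumptions rather than values read from the model object.

```python
# LoRA adds two low-rank matrices (d_out x r and r x d_in) per adapted
# projection, so each one contributes r * (d_in + d_out) trainable parameters.
r = 16          # LoRA rank, matching the config above
hidden = 4096   # Llama-3-8B hidden size
kv_dim = 1024   # k/v projection width under grouped-query attention
layers = 32     # transformer blocks in Llama-3-8B

proj_shapes = {
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv_dim),
    "v_proj": (hidden, kv_dim),
    "o_proj": (hidden, hidden),
}

per_layer = sum(r * (d_in + d_out) for d_in, d_out in proj_shapes.values())
total = per_layer * layers
print(f"{total:,} trainable LoRA parameters")  # ~13.6M, vs ~8B frozen weights
```

Roughly 13.6 million trainable parameters against 8 billion frozen ones is why LoRA fine-tuning fits in so little memory.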

Step 3: Training Configuration

Configure the training arguments using SFTTrainer from TRL:

from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset['train'],
    dataset_text_field="text",
    max_seq_length=2048,
    dataset_num_proc=4,
    packing=True,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=10,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=1,
        save_strategy="epoch",
        output_dir="outputs",
        optim="adamw_8bit",
    ),
)

trainer.train()
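The arguments above imply an effective batch size of per_device_train_batch_size × gradient_accumulation_steps. A quick sanity calculation, assuming a single GPU and a hypothetical 10,000-example dataset (note that packing=True merges short examples, so real optimizer steps per epoch may be lower):

```python
import math

per_device_batch = 2    # per_device_train_batch_size above
grad_accum = 4          # gradient_accumulation_steps above
num_gpus = 1            # assumption: single-GPU training
dataset_size = 10_000   # hypothetical example count

effective_batch = per_device_batch * grad_accum * num_gpus
steps_per_epoch = math.ceil(dataset_size / effective_batch)

print(f"effective batch size: {effective_batch}")       # 8
print(f"optimizer steps per epoch: {steps_per_epoch}")  # 1250
```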

Automating the Workflow with Claude Code

Claude Code excels at automating repetitive tasks in your fine-tuning workflow. Create a Claude Code skill to streamline common operations. A skill lives at .claude/skills/<name>/SKILL.md: a Markdown file whose YAML frontmatter gives the skill a name and description, followed by instructions Claude follows when the skill is invoked:

---
name: unsloth-finetune
description: Automate Unsloth fine-tuning workflows - dataset preparation, training, and evaluation
---

## Prepare the dataset
Validate train.jsonl, confirming every record has non-empty prompt and response fields, and report any malformed lines.

## Train the model
Run the training script with the project's standard Unsloth/LoRA hyperparameters and save checkpoints to outputs/.

## Evaluate the model
Load the latest checkpoint, run it against the held-out test set, and summarize the results.

Use Claude Code’s agent capabilities to iterate on prompts and datasets:

claude "Analyze my training dataset and suggest improvements for better model performance"

Optimization Tips and Best Practices

Memory Optimization

When working with larger models, implement these memory-saving techniques:

# Make inputs require grads so gradient checkpointing works with frozen PEFT base weights
model.enable_input_require_grads()

# Use 4-bit quantization for inference
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    load_in_4bit=True,
)
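A rough back-of-envelope shows why 4-bit loading fits on consumer GPUs: 4-bit weights use half a byte per parameter, so an 8B-parameter base model needs under 4 GB for weights, leaving room for activations, gradients, and LoRA state on a 16-24 GB card. A sketch of that arithmetic, with the parameter count as a ballpark assumption:

```python
params = 8e9               # ~8B parameters in the base model
bytes_4bit = params * 0.5  # 4 bits = 0.5 bytes per weight
bytes_fp16 = params * 2    # fp16 comparison: 2 bytes per weight

gb = 1024 ** 3
print(f"4-bit weights: {bytes_4bit / gb:.1f} GB")  # ~3.7 GB
print(f"fp16 weights:  {bytes_fp16 / gb:.1f} GB")  # ~14.9 GB
```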

Training Speed Improvements

  - Keep packing=True (as in the SFTTrainer config above) so short examples share sequence slots
  - Prefer bf16 on Ampere or newer GPUs; fp16 remains the fallback on older cards
  - Raise per-device batch size as far as memory allows before adding gradient accumulation
  - Use the adamw_8bit optimizer to reduce optimizer-state memory

Validation and Testing

Always validate your fine-tuned model before deployment:

# Switch Unsloth to its optimized inference mode
FastLanguageModel.for_inference(model)

# Test inference
inputs = tokenizer([
    "### Instruction\nSummarize: The quick brown fox...\n\n### Response\n"
], return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))

Common Pitfalls and How to Avoid Them

  1. Overfitting on small datasets: Use appropriate validation splits and monitor loss curves
  2. Incorrect data formatting: Ensure consistent prompt templates throughout your dataset
  3. Insufficient training steps: Start with more epochs and reduce based on validation performance
  4. Memory issues: Reduce batch size and enable gradient checkpointing
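For pitfall 1, hold out a validation split before training rather than training on everything. A pure-Python sketch of a reproducible shuffled split (the 10% fraction and seed are arbitrary choices; the datasets library's train_test_split does the same thing on a loaded dataset):

```python
import random

def split_dataset(records, val_fraction=0.1, seed=42):
    """Shuffle and split records into (train, validation) lists."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

# Usage:
# train_records, val_records = split_dataset(all_records)
```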

Conclusion

Combining Claude Code with Unsloth creates a powerful fine-tuning workflow that significantly reduces development time while maintaining high model quality. By automating data preparation, model configuration, and training processes, you can focus on iterative improvements and experimentation.

Start with smaller models like Phi-3 or Mistral 7B to understand the workflow, then scale to larger models as you gain confidence. The key is iterative development—train, evaluate, refine your data, and retrain.

Remember to monitor GPU memory usage and adjust parameters accordingly. With practice, you’ll develop an intuition for optimal configurations that balance speed, memory efficiency, and model performance.

Happy fine-tuning!