AI Tools Compared: Claude, GPT-4, and Copilot on Vagrant Configuration

Vagrant is still widely used for reproducible development environments, especially for teams that need to mirror complex Linux server setups locally. Writing Vagrantfiles — especially multi-machine configs with networking, shared folders, and provisioning — is tedious. This guide tests Claude, GPT-4, and GitHub Copilot on real Vagrant configuration tasks.

Task 1: Single-VM Development Environment

Prompt: “Create a Vagrantfile for a Node.js development environment on Ubuntu 22.04 with: NFS shared folder, forwarded ports for app (3000) and debugger (9229), 4GB RAM, provisioned with Node 20, pnpm, and nginx as reverse proxy.”

Claude’s output:

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.box_version = "~> 20240301.0"

  config.vm.hostname = "dev-node"

  # Network
  config.vm.network "forwarded_port", guest: 3000, host: 3000, host_ip: "127.0.0.1"
  config.vm.network "forwarded_port", guest: 9229, host: 9229, host_ip: "127.0.0.1"
  config.vm.network "private_network", ip: "192.168.56.10"

  # NFS shared folder (works natively on macOS/Linux hosts; the optional 'vagrant-bindfs' plugin smooths permission mapping on macOS)
  config.vm.synced_folder ".", "/vagrant", type: "nfs",
    nfs_udp: false,
    mount_options: ["rw", "vers=3", "tcp", "noatime", "actimeo=2"]

  # Provider settings
  config.vm.provider "virtualbox" do |vb|
    vb.name = "dev-node"
    vb.memory = 4096
    vb.cpus = 2
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end

  # Provisioning
  config.vm.provision "shell", inline: <<-SHELL
    set -euo pipefail

    # Update and install dependencies
    apt-get update -qq
    apt-get install -y -qq curl gnupg2 nginx

    # Install Node.js 20 via NodeSource
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
    apt-get install -y nodejs

    # Install pnpm
    corepack enable
    corepack prepare pnpm@latest --activate

    # Configure nginx as reverse proxy
    cat > /etc/nginx/sites-available/app << 'EOF'
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
EOF
    ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
    rm -f /etc/nginx/sites-enabled/default
    systemctl enable nginx
    systemctl restart nginx

    echo "Node $(node --version), pnpm $(pnpm --version) installed."
  SHELL
end

Claude adds host_ip: "127.0.0.1" on forwarded ports (security best practice to avoid exposing ports on all interfaces), uses set -euo pipefail for safe provisioning scripts, and includes NFS mount options for performance.

GPT-4 produces similar output but typically omits the host_ip binding and set -euo pipefail. Functional but less secure by default.

Why host_ip: "127.0.0.1" Matters

Without host_ip, Vagrant binds forwarded ports on 0.0.0.0, which means port 3000 is accessible from any machine on your network — not just localhost. On a laptop used in shared offices or on public Wi-Fi, this exposes your development server. Claude applies this constraint automatically; GPT-4 only adds it when you specifically ask about security.
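The difference is easy to demonstrate outside Vagrant. A minimal Ruby sketch (illustrative only, not part of any Vagrantfile) showing what the two bind addresses mean:

```ruby
require "socket"

# Bind a listener the way Vagrant does with host_ip: "127.0.0.1" ...
loopback_only = TCPServer.new("127.0.0.1", 0)  # port 0: pick any free port
# ... and the way it does without host_ip (0.0.0.0 = every interface)
all_interfaces = TCPServer.new("0.0.0.0", 0)

puts loopback_only.addr[3]   # "127.0.0.1" - reachable only from this machine
puts all_interfaces.addr[3]  # "0.0.0.0"   - reachable from the whole network

loopback_only.close
all_interfaces.close
```

A port scan from another machine on the LAN will see the second listener but not the first; that is exactly the exposure the host_ip binding prevents.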

Task 2: Multi-Machine Setup

Prompt: “Vagrant config for a 3-tier app: load balancer (nginx, 1GB RAM), 2 app servers (Node.js, 2GB RAM each), 1 database server (PostgreSQL 16, 2GB RAM). Private network between all VMs. Database accessible only from app servers.”

Claude’s output:

# Vagrantfile — 4-node cluster
NODES = {
  "lb"      => { ip: "192.168.56.10", memory: 1024, cpus: 1, role: :loadbalancer },
  "app-01"  => { ip: "192.168.56.11", memory: 2048, cpus: 2, role: :app },
  "app-02"  => { ip: "192.168.56.12", memory: 2048, cpus: 2, role: :app },
  "db-01"   => { ip: "192.168.56.13", memory: 2048, cpus: 2, role: :database },
}

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  NODES.each do |name, node|
    config.vm.define name do |machine|
      machine.vm.hostname = name
      machine.vm.network "private_network", ip: node[:ip]

      machine.vm.provider "virtualbox" do |vb|
        vb.name = name
        vb.memory = node[:memory]
        vb.cpus = node[:cpus]
      end

      # Only expose LB to host
      if node[:role] == :loadbalancer
        machine.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
      end

      machine.vm.provision "shell" do |s|
        s.inline = provision_script(node[:role], NODES)
      end
    end
  end
end

def provision_script(role, nodes)
  app_ips = nodes.select { |_, n| n[:role] == :app }.map { |_, n| n[:ip] }.join(" ")
  db_ip   = nodes.find   { |_, n| n[:role] == :database }&.last&.fetch(:ip)

  case role
  when :loadbalancer
    <<~SHELL
      set -euo pipefail
      apt-get update -qq && apt-get install -y -qq nginx
      cat > /etc/nginx/conf.d/upstream.conf << EOF
upstream app_servers {
    least_conn;
    #{nodes.select { |_, n| n[:role] == :app }.map { |_, n| "    server #{n[:ip]}:3000;" }.join("\n")}
}
server {
    listen 80;
    location / { proxy_pass http://app_servers; }
}
EOF
      systemctl restart nginx
    SHELL
  when :app
    <<~SHELL
      set -euo pipefail
      curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
      apt-get install -y nodejs
      # Store DB connection for app
      echo "DATABASE_URL=postgresql://appuser:secret@#{db_ip}/appdb" >> /etc/environment
    SHELL
  when :database
    <<~SHELL
      set -euo pipefail
      # PostgreSQL 16 is not in Ubuntu 22.04's default repos; add the PGDG repository first
      apt-get update -qq && apt-get install -y -qq curl ca-certificates gnupg
      curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor -o /usr/share/keyrings/pgdg.gpg
      echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list
      apt-get update -qq
      apt-get install -y -qq postgresql-16
      # Configure PostgreSQL to listen on private network
      sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '#{db_ip}'/" \
        /etc/postgresql/16/main/postgresql.conf
      # Allow only app servers
      #{app_ips.split.map { |ip| "echo \"host appdb appuser #{ip}/32 scram-sha-256\" >> /etc/postgresql/16/main/pg_hba.conf" }.join("\n")}
      systemctl restart postgresql
    SHELL
  end
end

This uses a Ruby hash to define nodes, loops over them to avoid repetition, and generates provisioning scripts dynamically with correct IP references.
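The cross-machine wiring can be exercised on its own. A stripped-down sketch of the same derivation in plain Ruby, no Vagrant required:

```ruby
# Same shape as the NODES hash above, reduced to the fields the wiring needs
NODES = {
  "lb"     => { ip: "192.168.56.10", role: :loadbalancer },
  "app-01" => { ip: "192.168.56.11", role: :app },
  "app-02" => { ip: "192.168.56.12", role: :app },
  "db-01"  => { ip: "192.168.56.13", role: :database },
}

# Derive app-server and database IPs from the single source of truth
app_ips = NODES.values.select { |n| n[:role] == :app }.map { |n| n[:ip] }
db_ip   = NODES.values.find { |n| n[:role] == :database }[:ip]

puts app_ips.join(" ")  # "192.168.56.11 192.168.56.12"
puts db_ip              # "192.168.56.13"
```

Adding a fifth machine is one new hash entry; every derived value (nginx upstream list, pg_hba rules, DATABASE_URL) updates automatically.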

GPT-4’s Multi-Machine Approach

GPT-4 typically generates separate config.vm.define blocks for each machine rather than looping, which means 4 blocks of nearly identical configuration. It works, but adding a fifth machine requires copy-pasting another block. It also hardcodes the database IP in the app provisioning script rather than deriving it from the node map, so the config can get out of sync if IPs change.

For two or three machines, GPT-4’s flat approach is readable. For larger clusters or clusters where machine count changes, Claude’s data-driven loop approach is far more maintainable.

Task 3: VMware vs VirtualBox Provider Config

Copilot handles provider-specific config well when you’re already in a Vagrantfile context:

# Copilot completes provider block when you type: config.vm.provider "vmware_desktop"
config.vm.provider "vmware_desktop" do |vmware|
  vmware.vmx["memsize"] = "4096"
  vmware.vmx["numvcpus"] = "2"
  vmware.vmx["ethernet0.pcislotnumber"] = "33"
end

# Conditional provider block — Copilot suggests this pattern
["virtualbox", "vmware_desktop"].each do |provider|
  config.vm.provider provider do |p|
    p.memory = 4096 if p.respond_to?(:memory=)
  end
end

The conditional provider block pattern works but is imprecise: the respond_to?(:memory=) guard silently skips providers such as VMware, which configures memory through the vmx hash rather than a memory= accessor, so the memory setting never gets applied there. For cross-provider Vagrantfiles, Claude generates a cleaner pattern:

# Claude's cross-provider config
config.vm.provider "virtualbox" do |vb|
  vb.memory = 4096
  vb.cpus   = 2
end

config.vm.provider "vmware_desktop" do |vmware|
  vmware.vmx["memsize"]  = "4096"
  vmware.vmx["numvcpus"] = "2"
end

config.vm.provider "parallels" do |prl|
  prl.memory = 4096
  prl.cpus   = 2
end

Vagrant only activates the block matching the current provider, so explicit per-provider blocks are cleaner than conditional logic.

Task 4: Ansible Provisioner Integration

For teams already using Ansible for server configuration, Claude generates the Ansible provisioner block correctly:

# Claude's Vagrantfile with Ansible provisioner
config.vm.provision "ansible" do |ansible|
  ansible.playbook          = "provisioning/site.yml"
  ansible.inventory_path    = "provisioning/inventory"
  ansible.limit             = "all"
  ansible.galaxy_role_file  = "provisioning/requirements.yml"
  ansible.galaxy_command    = "ansible-galaxy install --role-file=%{role_file} --roles-path=%{roles_path} --force"
  ansible.extra_vars        = {
    env:             "development",
    app_port:        3000,
    db_password:     "{{ lookup('env', 'DB_PASSWORD') }}",
  }
  ansible.verbose           = "v"
end

GPT-4 generates the basic ansible.playbook line but misses galaxy_role_file and galaxy_command, which means a vagrant provision run fails when the playbook has role dependencies. Claude includes Galaxy role installation as part of the provisioner setup.

Ansible Provisioner for Multi-Machine

For multi-machine setups, Ansible benefits from running against all machines in a single pass rather than separately per machine:

# Run Ansible once against all machines (more efficient than per-machine)
config.vm.define "app-02", primary: false do |m|
  # ... machine config ...

  # Only run provisioning after last machine is defined
  m.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/site.yml"
    ansible.limit    = "all"
    # Vagrant generates inventory automatically for all defined machines
  end
end

This pattern — attaching the Ansible provisioner to the last-defined machine with limit: "all" — lets Ansible handle inter-host dependencies (like app servers knowing the DB IP from inventory) in a single run. Claude generates this correctly when the prompt mentions Ansible and multi-machine. GPT-4 attaches Ansible to each machine individually, losing cross-host coordination.
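Inside a looped multi-machine definition, the same pattern can be expressed by attaching the provisioner only on the final iteration. A sketch (machine names and per-machine config are placeholders; `config` is the surrounding Vagrant.configure block variable):

```ruby
machine_names = ["lb", "app-01", "app-02", "db-01"]  # placeholder names

machine_names.each_with_index do |name, i|
  config.vm.define name do |m|
    m.vm.hostname = name
    # ... networking, provider settings ...

    # Attach Ansible only on the last iteration so it runs once,
    # with Vagrant's generated inventory covering every machine
    if i == machine_names.size - 1
      m.vm.provision "ansible" do |ansible|
        ansible.playbook = "provisioning/site.yml"
        ansible.limit    = "all"
      end
    end
  end
end
```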

Task 5: Box Management and Versioning

A frequently missed part of production Vagrantfile setup is explicit box versioning. Claude generates:

config.vm.box              = "ubuntu/jammy64"
config.vm.box_version      = "~> 20240301.0"
config.vm.box_check_update = false  # Disable on CI to avoid unexpected updates

The box_check_update = false is particularly important on CI where unexpected box updates can change behavior mid-sprint. GPT-4 omits box version pinning almost universally, which means a vagrant up run six months later may use a different kernel version than when the Vagrantfile was written.
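Vagrant's box version constraints follow RubyGems' pessimistic operator, so the behavior of "~> 20240301.0" can be checked directly in Ruby (the version strings below are illustrative; actual box version schemes vary by publisher):

```ruby
require "rubygems"  # Gem::Version / Gem::Requirement ship with Ruby

req = Gem::Requirement.new("~> 20240301.0")

puts req.satisfied_by?(Gem::Version.new("20240301.0"))  # true
puts req.satisfied_by?(Gem::Version.new("20240301.7"))  # true:  point releases still match
puts req.satisfied_by?(Gem::Version.new("20240401.0"))  # false: a new base image is refused
```

The constraint pins the base image while still admitting point releases, which is usually the balance you want for a team-shared Vagrantfile.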

Comparison Table

Task                          | Claude                       | GPT-4              | Copilot
------------------------------|------------------------------|--------------------|----------------
Single-VM with provisioning   | Excellent (secure defaults)  | Good               | Good inline
Multi-VM with loops           | Excellent                    | Good (flat blocks) | Weak
Dynamic IP wiring between VMs | Excellent                    | Moderate           | No
Provider-specific settings    | Good                         | Good               | Good in-context
Provisioning script safety    | set -euo pipefail by default | Sometimes          | No
NFS config for macOS          | Correct mount options        | Basic              | No
Ansible provisioner           | Excellent (includes Galaxy)  | Partial            | No
Box version pinning           | Explicit with constraint     | Omits              | Omits
host_ip security binding      | Always includes              | Omits by default   | Omits

Prompting for Best Results

All three tools produce better output with concrete topology descriptions. Instead of “a development environment,” specify the box and OS version, RAM and CPU per machine, the exact forwarded ports, the shared-folder type, and the provisioning steps (runtime versions, services, reverse proxy).
Claude handles the full topology in a single prompt reliably. GPT-4 handles it but may need a follow-up prompt to add security details. Copilot works best for completing individual stanzas when you already have the structure scaffolded.

Task 6: Handling Provider-Specific Quirks

A real-world issue: Windows Hyper-V has different networking than VirtualBox.

Prompt: “I’m switching from VirtualBox to Hyper-V on Windows. My Vagrantfile uses NFS shared folders and a private network. Convert it to work on both providers automatically.”

Claude suggests using Vagrant’s provider detection and fallback:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  if Vagrant::Util::Platform.windows?
    # Windows + Hyper-V: Use SMB, not NFS
    config.vm.synced_folder ".", "/vagrant", type: "smb",
      smb_username: ENV['USERNAME'],
      smb_password: ENV['PASSWORD']

    # The Hyper-V provider ignores Vagrant network configuration; the VM
    # attaches to a Hyper-V virtual switch, which Vagrant prompts for at boot
  else
    # Linux/macOS: Use NFS
    config.vm.synced_folder ".", "/vagrant", type: "nfs",
      nfs_udp: false
    config.vm.network "private_network", ip: "192.168.56.10"
  end

  # Vagrant activates only the provider block matching the provider in use,
  # so all three can be declared unconditionally; no ARGV inspection is needed
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus = 2
  end

  config.vm.provider "vmware_desktop" do |vmware|
    vmware.vmx["memsize"] = "4096"
    vmware.vmx["numvcpus"] = "2"
  end

  config.vm.provider "hyperv" do |hv|
    hv.memory = 4096
    hv.cpus = 2
    hv.vm_integration_services = {
      guest_service_interface: true,
      heartbeat: true,
      key_value_pair_exchange: true,
      shutdown: true,
      time_synchronization: true,
      vss: true
    }
  end
end

Task 7: Debugging Vagrant Issues with AI

Common failure modes and how AI helps:

Scenario: “Vagrant SSH hangs. The VM boots but SSH won’t connect.”

Claude’s troubleshooting steps:

  1. Check for a guest additions version mismatch with the host's VirtualBox version
  2. Verify provisioning completed (inspect the vagrant up output)
  3. Test connectivity directly (ssh to the VM's IP rather than vagrant ssh)
# Claude suggests running
vagrant ssh-config
# If this hangs, the VM isn't actually ready. Try:
vagrant destroy -f
vagrant up --provider=virtualbox

Scenario: “Shared folder mounting fails with permission errors.”

Claude identifies that the root cause is often a stale or stopped file-sharing service on the host:

# For NFS on macOS
sudo nfsd status
sudo nfsd restart

# For SMB on Windows
net use Z: \\<host-ip>\vagrant /user:<username>

Migration Strategy: From Docker Compose to Vagrant

Teams often ask Claude: “Should we move from Docker Compose to Vagrant?”

Claude explains the tradeoff: Docker Compose starts in seconds and suits stateless, app-level development, but containers share the host kernel; Vagrant boots a full VM with its own kernel, systemd, and network stack, which is what you need for system-level testing.

Claude can generate both simultaneously:

# docker-compose.yml for quick dev
version: '3.9'
services:
  app:
    build: .
    ports:
      - "3000:3000"

# Vagrantfile for system-level testing
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.provision "shell", inline: "apt-get update && apt-get install -y build-essential"
end

Performance Comparison: Cold Starts

When speed matters (CI/CD context), AI helps pick the right tool:

Tool                 | First Boot    | Subsequent Boots | Network Setup | Best For
---------------------|---------------|------------------|---------------|---------------------
Vagrant + VirtualBox | 2-4 minutes   | 30-60 seconds    | Automatic     | Local dev, multi-VM
Vagrant + Hyper-V    | 1-2 minutes   | 20-40 seconds    | Automatic     | Windows teams
Docker               | 5-30 seconds  | 1-2 seconds      | Manual        | Stateless, testing
Vagrant + Multipass  | 30-60 seconds | 10-20 seconds    | Automatic     | Lightweight Ubuntu

Claude notes: if you’re booting 20 Vagrant VMs in CI, switch to containers. But for local development requiring real Systemd/networking, Vagrant wins.

Real-World Lessons

Lesson 1: Always version-lock your box. Claude warns that config.vm.box_version prevents silent failures when base images change.

Lesson 2: Provisioning order matters. Claude enforces installing packages before writing configs that depend on them.
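Provisioners run in the order they are declared, so that ordering can be made explicit with named provisioners. A sketch (names and the nginx config path are illustrative):

```ruby
# Runs first: install packages
config.vm.provision "packages", type: "shell", inline: <<-SHELL
  apt-get update -qq && apt-get install -y -qq nginx
SHELL

# Runs second: write configs that assume the packages exist
config.vm.provision "configs", type: "shell", inline: <<-SHELL
  ln -sf /vagrant/nginx/app.conf /etc/nginx/sites-enabled/app  # illustrative path
  systemctl reload nginx
SHELL
```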

Lesson 3: Use the vagrant-reload plugin for kernel updates. Rebooting with an inline "reboot" kills the SSH connection mid-provision; instead, add a reload step after the provisioner that installs the kernel:

config.vm.provision :reload  # from the vagrant-reload plugin; cleanly restarts the VM

Built by theluckystrike — More at zovo.one