Remote Work Tools

Tailscale turns every device your team uses into a node on a private network, without requiring a central VPN server, NAT traversal rules, or certificate management. Each device gets a stable IP in the 100.64.0.0/10 range, reachable from any other device on the tailnet regardless of what network either is on.

For remote teams, Tailscale replaces the classic VPN setup with something that works in 10 minutes, handles firewall traversal automatically, and scales to hundreds of devices without extra configuration.

Install on All Platforms

# macOS
brew install tailscale
# or download the App Store app (recommended for menu bar integration)

# Linux (Ubuntu/Debian)
curl -fsSL https://tailscale.com/install.sh | sh

# RHEL/CentOS/Amazon Linux
curl -fsSL https://tailscale.com/install.sh | sh

# Raspberry Pi (armv7/arm64)
curl -fsSL https://tailscale.com/install.sh | sh

# Windows
# Download installer from tailscale.com/download

# Docker
docker run -d \
  --name tailscale \
  --cap-add NET_ADMIN \
  --cap-add NET_RAW \
  --device /dev/net/tun \
  -v /var/lib/tailscale:/var/lib/tailscale \
  -e TS_AUTHKEY=tskey-auth-XXXXXX \
  tailscale/tailscale
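For long-running hosts, the same container can be managed with Compose; a minimal sketch equivalent to the `docker run` command above (the auth key value is a placeholder, and `restart: unless-stopped` is an added convenience, not part of Tailscale's own example):

```yaml
# docker-compose.yml — illustrative sketch of the docker run command above
services:
  tailscale:
    image: tailscale/tailscale
    container_name: tailscale
    cap_add:
      - NET_ADMIN
      - NET_RAW
    devices:
      - /dev/net/tun
    volumes:
      - /var/lib/tailscale:/var/lib/tailscale   # persist node state across restarts
    environment:
      - TS_AUTHKEY=tskey-auth-XXXXXX            # placeholder — generate in admin console
    restart: unless-stopped
```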

Authenticate and Start

# Start Tailscale and open browser to authenticate
sudo tailscale up

# Headless authentication (for servers) — generate auth key in admin console
sudo tailscale up --authkey tskey-auth-XXXXXX

# Check status
tailscale status
# Shows all connected devices with IPs, last seen, and OS

# Ping another device
tailscale ping 100.64.0.5
tailscale ping hostname-of-device

# SSH to a device using Tailscale IP
ssh ubuntu@100.64.0.5

# Or with MagicDNS, use the device hostname directly
ssh ubuntu@my-dev-server
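An `~/.ssh/config` entry keeps the short hostname and default user in one place; a sketch, with host, tailnet domain, and user names as placeholders:

```
# ~/.ssh/config — illustrative entry; names are placeholders
Host my-dev-server
    HostName my-dev-server.tail1234.ts.net
    User ubuntu
```

After this, `ssh my-dev-server` resolves the full MagicDNS name and picks the right user automatically.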

Enable MagicDNS and HTTPS

MagicDNS assigns each device a DNS name like device-name.tail1234.ts.net:

  1. Go to Tailscale admin console → DNS
  2. Enable MagicDNS
  3. Enable HTTPS certificates (Let’s Encrypt, automated)

After enabling:

# Access a device by hostname instead of IP
ssh ubuntu@devserver.tail1234.ts.net
curl https://staging-app.tail1234.ts.net

# Get HTTPS cert for a service running on a Tailscale node
tailscale cert staging-app.tail1234.ts.net
# Writes the certificate and key, named after the domain
# (e.g. staging-app.tail1234.ts.net.crt and .key), to the current directory
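The cert and key can be wired straight into a web server in front of the service; a minimal nginx sketch (the `/etc/nginx/certs` paths and backend port are assumptions for illustration, not Tailscale defaults — `tailscale cert` accepts `--cert-file` and `--key-file` to control where the files land):

```nginx
# Sketch: serve an internal app over HTTPS with a Tailscale-issued cert.
# Generate the files first, e.g.:
#   tailscale cert --cert-file /etc/nginx/certs/staging-app.crt \
#                  --key-file  /etc/nginx/certs/staging-app.key \
#                  staging-app.tail1234.ts.net
server {
    listen 443 ssl;
    server_name staging-app.tail1234.ts.net;

    ssl_certificate     /etc/nginx/certs/staging-app.crt;
    ssl_certificate_key /etc/nginx/certs/staging-app.key;

    location / {
        proxy_pass http://127.0.0.1:3000;  # backend port is a placeholder
    }
}
```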

Configure ACLs (Access Control Lists)

ACLs control which devices can reach which others. By default, every device on a tailnet can reach every other device. For teams, restrict access:

// Tailscale ACL policy (JSON with comments)
// Set in admin console → Access Controls

{
  "groups": {
    "group:engineering": ["user:alice@company.com", "user:bob@company.com"],
    "group:devops": ["user:carol@company.com"],
    "group:contractors": ["user:dave@contractco.com"]
  },

  "tagOwners": {
    "tag:prod-server": ["group:devops"],
    "tag:dev-server": ["group:engineering"],
    "tag:exit-node": ["group:devops"]
  },

  "acls": [
    // Engineering can reach dev servers
    {
      "action": "accept",
      "src": ["group:engineering"],
      "dst": ["tag:dev-server:*"]
    },
    // DevOps can reach everything
    {
      "action": "accept",
      "src": ["group:devops"],
      "dst": ["*:*"]
    },
    // Contractors can only reach specific services on dev
    {
      "action": "accept",
      "src": ["group:contractors"],
      "dst": ["tag:dev-server:443", "tag:dev-server:80"]
    },
    // Everyone can reach exit nodes
    {
      "action": "accept",
      "src": ["*"],
      "dst": ["tag:exit-node:*"]
    }
  ],

  "ssh": [
    // Engineering can SSH to dev servers
    {
      "action": "accept",
      "src": ["group:engineering"],
      "dst": ["tag:dev-server"],
      "users": ["ubuntu", "ec2-user"]
    }
  ]
}

Tag a device when authenticating:

# Tag a server as a dev server during auth
sudo tailscale up --authkey tskey-auth-XXXXXX --advertise-tags tag:dev-server

Set Up Subnet Routes

Subnet routes let Tailscale nodes access private subnets — for example, your AWS VPC or office LAN — without installing Tailscale on every machine in the subnet.

# On the gateway/subnet router (a Tailscale node in the subnet)
# Enable IP forwarding first
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Advertise the subnet
sudo tailscale up --advertise-routes=10.0.1.0/24,10.0.2.0/24

# In the admin console: approve the advertised routes
# Admin → Machines → select machine → Edit route settings → approve routes

# On client devices: use the route
# By default, clients use approved subnet routes automatically
# Verify:
tailscale status
# Shows subnet routers with the routes they advertise
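Manual route approval can be skipped for trusted devices using the `autoApprovers` section of the ACL policy; a sketch, assuming a `tag:subnet-router` tag (not defined in the policy earlier in this article — you would add it to `tagOwners`):

```
// Sketch: auto-approve routes from tagged subnet routers
// so new routers come online without a console visit
{
  "autoApprovers": {
    "routes": {
      "10.0.1.0/24": ["tag:subnet-router"],
      "10.0.2.0/24": ["tag:subnet-router"]
    },
    "exitNode": ["tag:exit-node"]
  }
}
```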

Exit Nodes

An exit node routes all internet traffic for a device through a Tailscale node — equivalent to a VPN exit point.

# Set up a node as an exit node
sudo tailscale up --advertise-exit-node

# Approve in admin console: Machines → select → Edit route settings → Use as exit node

# On client: use the exit node
sudo tailscale up --exit-node=100.64.0.5
# or by hostname
sudo tailscale up --exit-node=exit-server

# Disable exit node (go back to direct routing)
sudo tailscale up --exit-node=

# Verify your IP is now the exit node's IP
curl https://ipinfo.io

Tailscale SSH (Replace SSH Key Management)

Tailscale SSH uses your Tailscale identity instead of SSH keys. No more distributing authorized_keys files:

# Enable Tailscale SSH on a server
sudo tailscale up --ssh

# The server now accepts SSH from authorized Tailscale users
# No ~/.ssh/authorized_keys needed

# SSH from any authorized device
ssh ubuntu@devserver  # uses Tailscale identity, no key needed

# Check Tailscale SSH audit log in admin console
# Admin → Logs → SSH

Enable Tailscale SSH in ACL policy:

{
  "ssh": [
    {
      "action": "accept",
      "src": ["group:engineering"],
      "dst": ["tag:dev-server"],
      "users": ["autogroup:nonroot", "ubuntu"]
    },
    {
      "action": "check",  // require re-auth for prod
      "src": ["group:devops"],
      "dst": ["tag:prod-server"],
      "users": ["ubuntu"]
    }
  ]
}

Running Tailscale on Servers at Boot

# systemd service is installed automatically via the install script
# Verify it's enabled
sudo systemctl status tailscaled

# Auto-start with auth key (for automated server provisioning)
sudo systemctl enable tailscaled
sudo tailscale up --authkey tskey-auth-XXXXXX --advertise-tags tag:dev-server

# For ephemeral nodes (e.g., CI runners) that deregister when stopped,
# generate the auth key with the "Ephemeral" option in the admin console —
# ephemerality is a property of the key, not a tailscale up flag
sudo tailscale up --authkey tskey-auth-XXXXXX
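For fully automated provisioning, the install and auth steps can go into cloud-init user data; a sketch (in practice the auth key would come from a secrets store rather than being inlined, and the tag must match one defined in your ACL policy):

```yaml
#cloud-config — illustrative provisioning sketch
runcmd:
  - curl -fsSL https://tailscale.com/install.sh | sh
  - tailscale up --authkey tskey-auth-XXXXXX --advertise-tags tag:dev-server
```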

Tailscale vs. Traditional VPN for Remote Teams

Many teams migrate from OpenVPN or WireGuard to Tailscale for good reasons, but understanding the tradeoffs before committing helps you avoid surprises.

Traditional VPNs like OpenVPN require a central gateway server that becomes a single point of failure and a bandwidth bottleneck. Every packet between two remote employees travels through the central server even if both are on fast connections. WireGuard solves some of that but still requires manual key exchange and configuration management at scale.

Tailscale wraps WireGuard in a coordination layer that handles key distribution automatically. When two devices connect, Tailscale negotiates a direct WireGuard tunnel between them whenever network topology allows. If direct connection is blocked by symmetric NAT, traffic relays through Tailscale’s DERP servers (Designated Encrypted Relay for Packets), which are deployed globally across major cloud regions.

For a remote team of 10–50 people, Tailscale’s Teams plan (around $6/user/month) is cost-competitive with running your own VPN infrastructure once you account for EC2 or DigitalOcean instance costs, certificate management, and engineering time for ongoing maintenance. At 10 engineers, that’s $720/year — well below the cost of a single on-call incident caused by a down VPN gateway.
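The back-of-envelope math above, as a one-liner (headcount and per-seat price are the assumptions stated in the paragraph):

```shell
users=10           # team size
per_user_month=6   # approximate Teams plan price, USD
annual=$((users * per_user_month * 12))
echo "$annual"     # prints 720 — annual spend in USD
```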

One area where traditional VPNs still win: regulatory environments that require traffic inspection. Tailscale encrypts end-to-end with WireGuard, so a middlebox cannot inspect payloads. If your compliance posture requires deep packet inspection of internal traffic, complement Tailscale with application-layer logging rather than relying on network-layer inspection.

Integrating Tailscale with CI/CD Pipelines

One underused pattern is adding Tailscale to CI runners so they can reach private staging infrastructure without opening firewall ports to the internet.

# GitHub Actions example
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Setup Tailscale
        uses: tailscale/github-action@v2
        with:
          authkey: ${{ secrets.TAILSCALE_AUTHKEY }}
          tags: tag:ci

      - name: Deploy to staging
        run: |
          # staging-server is now reachable via Tailscale
          ssh deploy@staging-server.tail1234.ts.net 'cd /app && ./deploy.sh'

Generate a reusable auth key in the admin console under Settings → Keys. Use ephemeral keys for CI so runners automatically deregister after the job completes. Apply the tag:ci tag so ACLs grant CI runners access only to what they need — staging servers but not production.

The same pattern applies to GitLab CI, CircleCI, and Buildkite. Each runner authenticates with an ephemeral auth key, runs within the tailnet for the job duration, then disappears from the device list automatically.
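A GitLab CI job might look like the sketch below — this is an illustration, not syntax from Tailscale's docs; the image, variable name, hostname, and deploy command are all placeholders (`--tun=userspace-networking` runs tailscaled without TUN-device privileges, which unprivileged runners typically lack):

```yaml
# .gitlab-ci.yml — illustrative sketch
deploy_staging:
  image: ubuntu:22.04
  script:
    - curl -fsSL https://tailscale.com/install.sh | sh
    # userspace networking mode avoids needing /dev/net/tun in the runner
    - tailscaled --tun=userspace-networking &
    - tailscale up --authkey "$TAILSCALE_AUTHKEY" --hostname ci-runner
    - ssh deploy@staging-server.tail1234.ts.net 'cd /app && ./deploy.sh'
```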

Troubleshooting Common Issues

Device shows as offline in admin console but is running

Run tailscale status on the device. If it shows connected locally, check whether the machine’s clock is synchronized. Tailscale’s control plane relies on accurate time for key validation. Run sudo systemctl restart systemd-timesyncd or sudo ntpdate -u pool.ntp.org to force a sync.

Subnet routes not working after approval

Client devices need to have route acceptance enabled:

# Check whether the client accepts subnet routes
tailscale status
# Look for "subnet" in the output

# Force route acceptance
sudo tailscale up --accept-routes

Preferences passed to tailscale up are stored in the daemon's state and normally persist across reboots. If routes stop being accepted after a reboot, re-run sudo tailscale up --accept-routes, or use sudo tailscale set --accept-routes to change that one preference without re-supplying your other flags.

DNS resolution fails for MagicDNS names

MagicDNS injects 100.100.100.100 as a DNS resolver. Some Linux distributions override this with their own resolver. Check /etc/resolv.conf and verify 100.100.100.100 is listed. If systemd-resolved is active, run resolvectl status and confirm Tailscale’s interface has the correct DNS configuration.

High latency between two nodes

Run tailscale ping --until-direct <hostname> to check whether the connection is direct or relayed through DERP. If relayed, the most common causes are symmetric NAT on both endpoints (common on mobile carriers and some corporate firewalls) or mismatched UDP port availability. Check that UDP port 41641 is allowed outbound on both firewalls.

Pro Tips for Team Administration

Keep auth keys short-lived. Generate separate auth keys for each device class (workstations, servers, CI) with expirations of 30–90 days. This limits blast radius if a key leaks and forces periodic re-authentication, which is good hygiene regardless of security incidents.

Use device posture checks. Tailscale’s posture checks (available on the Business plan) let you enforce that devices have up-to-date OS versions before ACLs grant access. This is useful for contractor devices you do not manage — you can require a minimum macOS or Windows version before they reach internal resources.

Name devices descriptively before joining the tailnet. The default device name comes from the OS hostname. Set descriptive hostnames like alice-macbook-pro before authenticating — they are much easier to audit in access logs than Alices-MacBook-Pro-2.local or auto-generated cloud instance IDs.

Review the logs regularly. The admin console’s Logs section shows every SSH session and every network flow that Tailscale ACLs evaluated. Run a periodic review — monthly for small teams, weekly for larger ones — to spot unusual access patterns from contractors and service accounts.

Built by theluckystrike — More at zovo.one