
WireGuard Performance Tuning for Large File Transfers

Optimize WireGuard throughput for large file transfers by tuning the interface MTU (typically 1420 bytes) to avoid fragmentation from WireGuard's UDP encapsulation overhead, increasing kernel buffer limits (net.core.rmem_max and net.core.wmem_max to 16 MB or more), and confirming that segmentation offloads (TSO/GSO) are enabled on the underlying NIC to reduce per-packet processing cost; note that these offloads do not accelerate ChaCha20-Poly1305 encryption itself. Monitor actual throughput with iperf3 through the tunnel to identify bottlenecks. WireGuard's modern cryptography and lean in-kernel implementation provide excellent baseline throughput, but default kernel settings assume interactive traffic; bulk transfers require explicit tuning to reach gigabit speeds.

Understanding WireGuard Performance Characteristics

WireGuard uses modern cryptographic primitives and a lean in-kernel implementation, typically achieving throughput that exceeds older VPN protocols such as OpenVPN. However, several factors can limit performance:

- MTU mismatches that cause fragmentation of the encapsulated UDP packets
- Kernel socket buffers sized for interactive traffic rather than bulk transfers
- CPU-bound encryption on systems without SIMD support
- Network path issues such as throttled UDP ports or single-flow bottlenecks

MTU Configuration for Large Transfers

One of the most impactful optimizations involves proper MTU tuning. WireGuard operates at the IP layer, encapsulating packets within UDP. This adds overhead that can cause fragmentation if not properly accounted for.

Calculating Optimal MTU

The standard Ethernet MTU is 1500 bytes. WireGuard adds its own overhead to every packet: 20 bytes for the outer IPv4 header (40 for IPv6), 8 bytes for UDP, and 32 bytes of WireGuard framing (a 16-byte data-message header plus a 16-byte Poly1305 authentication tag).

For a typical configuration, setting MTU to 1420 (1500 minus the 80-byte IPv6 worst case) provides a safe margin:

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
MTU = 1420
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25
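The 1420-byte figure above follows directly from the per-packet overheads; the arithmetic can be sketched as shell, using the IPv6 outer header as the worst case:

```shell
# Derive the WireGuard interface MTU from the link MTU and encapsulation
# overheads (worst case: IPv6 outer header).
LINK_MTU=1500
OUTER_IP=40        # outer IPv6 header (20 for IPv4)
UDP=8              # UDP header
WG_HEADER=16       # WireGuard data-message header
POLY1305_TAG=16    # Poly1305 authentication tag
WG_MTU=$((LINK_MTU - OUTER_IP - UDP - WG_HEADER - POLY1305_TAG))
echo "$WG_MTU"     # prints 1420
```

Over an IPv4-only path the overhead is 60 bytes, so 1440 would also be safe, but 1420 covers both cases.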

For paths with known lower MTU networks (such as tunneled connections), reduce further:

# For paths through additional VPN tunnels or satellite links
MTU = 1280

Test your path MTU with ping using the Don't Fragment flag (note that -s sets the ICMP payload size; the packet on the wire is that plus 28 bytes of IP and ICMP headers):

# Test maximum unfragmented packet size (1400 + 28 = 1428 bytes on the wire)
ping -M do -s 1400 -c 3 10.0.0.2
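Rather than probing sizes by hand, the search can be scripted. A minimal sketch, where `check SIZE` is a placeholder hook (an assumption, not a real tool) that you would define as the Don't Fragment ping shown above:

```shell
# Scan downward from the Ethernet maximum to find the largest ICMP payload
# that still passes `check`. 1472 = 1500 minus 28 bytes of IP/ICMP headers.
find_max_payload() {
    size=1472
    while [ "$size" -ge 1200 ]; do
        if check "$size"; then
            echo "$size"
            return 0
        fi
        size=$((size - 4))
    done
    return 1
}

# In real use, define the probe hook as:
# check() { ping -M do -s "$1" -c 1 -W 1 10.0.0.2 >/dev/null 2>&1; }
```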

Kernel Parameter Tuning

Linux kernel parameters significantly impact WireGuard throughput. Apply these settings in /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) or via sysctl commands:

Network Buffer Sizes

# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.netdev_max_backlog = 50000
net.core.optmem_max = 25165824

TCP Window Scaling

# Enable TCP window scaling for improved throughput
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
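The 16 MB ceiling is sized against the bandwidth-delay product (BDP): a TCP window must cover bandwidth times round-trip time to keep the pipe full. The figures below (1 Gbit/s link, 100 ms RTT) are illustrative assumptions, not measurements:

```shell
# Bandwidth-delay product: bytes in flight needed to saturate the path.
BANDWIDTH_BITS=1000000000   # assumed 1 Gbit/s link
RTT_MS=100                  # assumed 100 ms round-trip time
BDP_BYTES=$((BANDWIDTH_BITS / 8 / 1000 * RTT_MS))
echo "$BDP_BYTES"           # 12500000 bytes (~12.5 MB), under the 16 MB cap
```

A gigabit path with 100 ms of latency needs roughly 12.5 MB of window, so the default few-hundred-kilobyte buffers would cap throughput far below line rate.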

Apply Changes

# Apply without reboot (reads /etc/sysctl.conf; use --system to include /etc/sysctl.d/)
sudo sysctl -p

# Verify current values
sysctl net.core.rmem_max

WireGuard-Specific Performance Options

Consolidated Configuration

Configuration layout has no measurable effect on runtime performance, but keeping the entire setup in a single file ensures tuning options such as MTU are applied consistently across all peers:

# Single comprehensive configuration file
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <key>
MTU = 1420
SaveConfig = false

[Peer]
# Multiple peers for redundancy
PublicKey = <peer1-key>
AllowedIPs = 10.0.0.2/32, 192.168.100.0/24
Endpoint = peer1.example.com:51820
PersistentKeepalive = 25

[Peer]
PublicKey = <peer2-key>
AllowedIPs = 10.0.0.3/32
Endpoint = peer2.example.com:51820

PersistentKeepalive Considerations

For large file transfers, PersistentKeepalive does not change throughput directly, but it keeps NAT and firewall mappings alive so that long-running transfers across NAT are not interrupted:

# Recommended for NATed environments
PersistentKeepalive = 25

# Can be disabled for server-to-server with static IPs
# PersistentKeepalive = 0

CPU and Encryption Optimization

WireGuard encryption performance varies across CPU architectures. WireGuard always uses ChaCha20-Poly1305 (AES-NI is not involved), which is designed to be fast in pure software; processors with SIMD extensions such as x86 AVX2 or ARM NEON accelerate it further.

Multi-Core Utilization

WireGuard runs in kernel space (with the wireguard module) and spreads encryption work across CPU cores. Ensure your system uses the kernel module rather than the userspace implementation (wireguard-go):

# Check if kernel module is loaded
lsmod | grep wireguard

# Load module if needed
sudo modprobe wireguard

# Verify kernel module is active
sudo wg show

CPU Frequency Scaling

For consistent performance, pin the CPU frequency governor to performance mode during bulk transfers:

# Set CPU to performance mode
sudo cpupower frequency-set -g performance

# Limiting deep C-states is a separate, complementary knob; add to the
# kernel command line: processor.max_cstate=1

Network Path Optimization

Using Alternate Ports

Some networks throttle WireGuard’s default UDP port 51820. Consider alternate ports:

# Using non-standard port
ListenPort = 51821

# Or common alternative ports
ListenPort = 443 # UDP 443 (shared with QUIC) may pass restrictive firewalls

Bonding Multiple Connections

For maximum aggregate throughput between two points, consider link bonding with LACP. Note that 802.3ad balances traffic per flow, and a single WireGuard tunnel is one UDP flow, so bonding only helps when multiple peers or parallel tunnels are in use; it also requires LACP support on the switch:

# Create bonded interface (member links must be down before enslaving)
sudo ip link add bond0 type bond mode 802.3ad
sudo ip link set eth0 down && sudo ip link set eth0 master bond0
sudo ip link set eth1 down && sudo ip link set eth1 master bond0
sudo ip link set bond0 up

# Run WireGuard over the bonded link

Benchmarking Your Setup

After applying optimizations, verify improvements with standardized tests:

iperf3 Testing

# Server side (run inside the tunnel network)
iperf3 -s -p 5201

# Client side: 30-second run, 1-second reporting interval
iperf3 -c 10.0.0.1 -p 5201 -t 30 -i 1

# Parallel streams (-P) and reverse mode (-R) help isolate single-flow limits
iperf3 -c 10.0.0.1 -p 5201 -t 30 -P 4
iperf3 -c 10.0.0.1 -p 5201 -t 30 -R

File Transfer Testing

# Using rsync over WireGuard (omit -z: compression skews throughput numbers)
time rsync -av --progress /large/file/path 10.0.0.2:/destination/

# Using netcat for raw throughput: receiver on 10.0.0.2 discards input
nc -l -p 9000 > /dev/null &

# Sender, with pv reporting the send rate
pv -r /dev/zero | nc 10.0.0.2 9000

Practical Deployment Example

A production-ready configuration for file transfer servers:

# Server configuration (wg0.conf)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
MTU = 1420

# Performance tuning via post-up/post-down
PostUp = sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216
PostUp = sysctl -w net.core.netdev_max_backlog=50000

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25
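For completeness, a matching client-side configuration under the same addressing scheme (the endpoint hostname and key placeholders are assumptions for illustration):

```ini
# Client configuration (wg0.conf)
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>
MTU = 1420

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 10.0.0.0/24
Endpoint = server.example.com:51820
PersistentKeepalive = 25
```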

Apply the same kernel tuning to both endpoints for consistent performance.

Monitoring and Maintenance

Track performance over time with monitoring:

# Check cumulative per-peer transfer counters
sudo wg show wg0 transfer

# Monitor live bandwidth with iftop
sudo iftop -i wg0

# Refresh live peer statistics every second
sudo watch -n1 'wg show wg0'
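The transfer counters above are cumulative, so a throughput figure requires sampling twice and dividing the delta by the interval. A minimal sketch of that arithmetic (the counter values in the example are hypothetical, not real output):

```shell
# rate BYTES_BEFORE BYTES_AFTER INTERVAL_SECONDS -> bytes per second
rate() {
    echo $(( ($2 - $1) / $3 ))
}

# In practice, take the two samples from the receive column of
# `wg show wg0 transfer` some seconds apart. Hypothetical counters:
rate 1000000 126000000 10   # 12500000 bytes/s, i.e. 100 Mbit/s
```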

Built by theluckystrike — More at zovo.one