Remote Work Tools

Secure File Transfer Protocol Setup for Remote Teams Exchanging Large Files

When your remote engineering team needs to exchange large files—database dumps, build artifacts, video assets, or ML model weights—cloud storage services often impose frustrating upload limits and per-file restrictions. Setting up a dedicated secure file transfer protocol server gives your team full control over file transfers with enterprise-grade security, and it costs nothing beyond the server infrastructure you already operate.

This guide walks through configuring SFTP using OpenSSH on Linux, implementing key-based authentication, setting up per-user access controls, and automating large file transfers with scripts your team can integrate into existing workflows.

Why SFTP Over Cloud Services for Large File Exchange

Commercial cloud storage platforms work well for documents and moderate-sized files, but large file transfers hit several walls. Upload limits typically cap single files at 5-15GB. Bandwidth charges accumulate quickly. Sync clients consume local resources. And sharing links requires managing permissions across yet another platform.

SFTP (SSH File Transfer Protocol) solves these problems by giving your team a dedicated server where file size limits are what you set, bandwidth is your own, and authentication integrates with the same SSH keys developers already use for server access. Every transfer is encrypted, authenticated, and logged.

Setting Up the SFTP Server

Most Linux distributions ship with OpenSSH, which includes SFTP support out of the box. If you need to install or verify:

# Ubuntu/Debian
sudo apt update
sudo apt install openssh-server

# RHEL/CentOS (use dnf on newer releases)
sudo yum install openssh-server

Edit the SSH daemon configuration to enable SFTP with proper access controls:

sudo vim /etc/ssh/sshd_config

Add or modify these settings:

# Use the in-process SFTP server (required for ChrootDirectory)
Subsystem sftp internal-sftp

# Restrict specific users to SFTP-only access
Match User developer1,developer2,designer1
    ChrootDirectory /sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

The ChrootDirectory directive locks each user into their designated directory, preventing them from navigating to system files. OpenSSH requires every component of the chroot path to be owned by root and writable only by root, which is why each user gets writable subdirectories rather than a writable chroot top. Create the directory structure:

sudo mkdir -p /sftp/{developer1,developer2,designer1}/{uploads,downloads}
sudo chown root:root /sftp/*
sudo chown developer1:developer1 /sftp/developer1/{uploads,downloads}
# Repeat the user-level chown for developer2 and designer1
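For more than a few users, the layout is easier to script as a loop. A minimal local sketch (it builds the same structure under a temp directory; on the real server substitute /sftp, add sudo, and keep the root-only chown steps shown above):

```shell
#!/bin/sh
# Build the per-user chroot layout in a loop.
# ILLUSTRATIVE: uses a scratch dir instead of /sftp and omits
# the chown steps, which need root.
root=$(mktemp -d)
for u in developer1 developer2 designer1; do
    mkdir -p "$root/$u/uploads" "$root/$u/downloads"
done
ls "$root/developer1"
```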

Restart the SSH service to apply changes:

sudo systemctl restart sshd   # the unit is named "ssh" on Debian/Ubuntu

Configuring Key-Based Authentication

Password-based SFTP access creates security risks and operational friction. Key-based authentication eliminates both while making automation straightforward.

Generate an ED25519 key pair (more secure and faster than RSA):

ssh-keygen -t ed25519 -C "sftp-access@yourcompany.com"

The private key stays on your local machine. The public key gets deployed to the server:

# On the SFTP server, as each user
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Paste the public key content into authorized_keys

For teams using multiple keys or rotating access, consider a centralized key management approach using authorized_keys with command restrictions (the sftp-server path below is the Debian/Ubuntu location; on RHEL it is /usr/libexec/openssh/sftp-server):

command="/usr/lib/openssh/sftp-server" ssh-ed25519 AAAA... user@workstation

This forces SFTP-only access even if someone obtains the key, preventing interactive shell access.

Managing Large File Transfers

SFTP handles large files well, but remote teams benefit from optimized transfer strategies. Here are practical approaches for different scenarios.

Resuming Interrupted Transfers

Network interruptions happen. SFTP supports resume operations natively in most clients:

# Using sftp command
sftp user@sftp.example.com
sftp> get -a large-dump.sql.gz

# scp cannot resume a partial transfer; use rsync instead
# (--partial keeps incomplete files so a rerun picks up where it left off)
rsync --partial --progress user@sftp.example.com:/path/to/large-file.tar.gz .

For unreliable connections, lftp provides sophisticated retry logic:

lftp -e "set net:timeout 30; set net:reconnect-interval-base 5; \
  mirror --verbose /remote/directory ./local-directory" \
  sftp://user@sftp.example.com

Parallel Transfers for Speed

Single-stream SFTP often doesn’t saturate high-bandwidth connections. Splitting a file and pushing chunks with GNU parallel can help; rsync is the better fit when you need resumability or bandwidth caps:

# Split large files into chunks and transfer in parallel
split -b 100M large-video.mp4 video-part-

# Transfer chunks concurrently
parallel -j 4 scp {} user@sftp.example.com:/uploads/ ::: video-part-*

# Or use rsync with bandwidth limiting (--bwlimit is in KiB/s, so ~50 MB/s here)
rsync -avz --partial --progress \
  --bwlimit=50000 \
  ./local-directory/ user@sftp.example.com:/remote-directory/
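After the chunks arrive, reassemble them with cat and confirm integrity with a byte-level comparison or checksum. A self-contained sketch of the round trip (done locally here; on the server you would run the cat step over ssh):

```shell
#!/bin/sh
# Verify that split + cat round-trips a file byte-for-byte.
head -c 1048576 /dev/urandom > large-video.mp4   # stand-in test file
split -b 262144 large-video.mp4 video-part-
cat video-part-* > reassembled.mp4
cmp -s large-video.mp4 reassembled.mp4 && status=ok || status=corrupt
echo "$status"
```

Comparing a sha256sum of the original against the reassembled file on the server gives the same guarantee across machines.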

Transfer Scripts for Team Workflows

Automate recurring transfers with shell scripts. This example syncs daily build artifacts:

#!/bin/bash
# sync-builds.sh - Run via cron or CI pipeline
# Note: this uses ssh for mkdir, so the buildbot account must not be
# restricted by the ForceCommand internal-sftp Match block above.
set -euo pipefail

SFTP_HOST="sftp.example.com"
SFTP_USER="buildbot"
REMOTE_DIR="/builds/${BUILD_NUMBER}"
LOCAL_DIR="./dist"

# Create remote directory
ssh "${SFTP_USER}@${SFTP_HOST}" "mkdir -p '${REMOTE_DIR}'"

# Transfer files with progress
scp -r "${LOCAL_DIR}"/* "${SFTP_USER}@${SFTP_HOST}:${REMOTE_DIR}/"

# Verify transfer
ssh "${SFTP_USER}@${SFTP_HOST}" "ls -la '${REMOTE_DIR}'"

Schedule it with cron for automated deployments:

# Run every night at 2 AM
0 2 * * * /home/developer/scripts/sync-builds.sh >> /var/log/sftp-sync.log 2>&1
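A slow transfer can overrun the next cron slot and start a second copy of the script. Wrapping the job in flock(1) prevents overlapping runs; a sketch (the lock path is an assumption, and echo stands in for the real script):

```shell
#!/bin/sh
# Skip this run if a previous invocation still holds the lock.
LOCK=/tmp/sftp-sync.lock
flock -n "$LOCK" -c 'echo sync-ran' && ran=yes || ran=no
echo "$ran"
```

In the crontab this becomes, e.g., `flock -n /tmp/sftp-sync.lock /home/developer/scripts/sync-builds.sh`.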

Security Hardening for Production

Beyond basic configuration, apply these hardening measures to protect your SFTP server.

Rate Limiting and Connection Throttling

Protect against brute-force attacks and resource exhaustion:

# In /etc/ssh/sshd_config
MaxAuthTries 3
MaxSessions 10
ClientAliveInterval 300
ClientAliveCountMax 2

Consider fail2ban for automatic IP blocking:

sudo apt install fail2ban

Create /etc/fail2ban/jail.local:

[sshd]
enabled = true
port = ssh
filter = sshd
maxretry = 3
findtime = 300
bantime = 3600

Network Isolation

Bind SFTP to specific interfaces or VPN addresses:

ListenAddress 10.0.1.50

Combine with firewall rules to permit only VPN or corporate IP ranges:

sudo ufw allow from 10.0.0.0/8 to any port 22 proto tcp
sudo ufw enable

Logging and Monitoring

Enable detailed SFTP logging for compliance and troubleshooting:

# In /etc/ssh/sshd_config
SyslogFacility AUTH
LogLevel VERBOSE
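With internal-sftp, per-file transfer logging can also be enabled by extending the ForceCommand line in the earlier Match block (a sketch; -l sets the log level, -f the syslog facility):

```
    ForceCommand internal-sftp -l INFO -f AUTH
```

Because internal-sftp runs inside the sshd process, this logging works even for chrooted users, which would not be the case with the standalone sftp-server binary.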

Monitor with logwatch or custom scripts:

# Check for unusual activity
grep "Failed password" /var/log/auth.log
grep "session opened" /var/log/auth.log
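Those greps can be rolled into a small per-IP report of failed attempts. A sketch that runs against a sample log (point LOG at /var/log/auth.log in production):

```shell
#!/bin/sh
# Count failed SSH logins per source IP.
LOG=sample-auth.log   # substitute /var/log/auth.log on the server
cat > "$LOG" <<'EOF'
Jan 10 02:11:01 host sshd[100]: Failed password for invalid user admin from 203.0.113.7 port 4242 ssh2
Jan 10 02:11:05 host sshd[101]: Failed password for root from 203.0.113.7 port 4243 ssh2
Jan 10 02:12:09 host sshd[102]: Failed password for root from 198.51.100.9 port 9999 ssh2
EOF
# Print the field after "from" (the source IP), then tally occurrences
summary=$(awk '/Failed password/ { for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' "$LOG" \
    | sort | uniq -c | sort -rn)
echo "$summary"
```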

Disk Quotas

Prevent any single user from filling your storage:

# Install quota tools
sudo apt install quota

# Add to /etc/fstab for the partition
/dev/sda1 /sftp ext4 defaults,usrquota,grpquota 0 2

# Configure user quotas
sudo edquota -u developer1

Set soft and hard limits appropriate to your storage capacity and use cases.

Choosing the Right Transfer Tool

Your team has several client options depending on workflow: the command-line sftp, rsync, and lftp tools shown above for scripting, and graphical clients such as FileZilla, Cyberduck, or WinSCP for ad hoc transfers.

For Python-based automation, here’s a quick example using pysftp (a thin wrapper around paramiko; the project is no longer actively maintained, so consider using paramiko directly for new work):

import pysftp

with pysftp.Connection('sftp.example.com', username='deploy',
                       private_key='/home/user/.ssh/id_ed25519') as sftp:
    sftp.put('/local/build/app.tar.gz', '/remote/builds/app.tar.gz')
    sftp.get('/remote/logs/transfer.log', '/local/logs/transfer.log')
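If you would rather avoid a Python dependency, the stock sftp client covers the same put/get pattern non-interactively via batch mode. A sketch (host, key path, and remote paths are illustrative; the sftp call itself is commented out here since it needs a reachable server):

```shell
#!/bin/sh
# Write an sftp batch file, then run it non-interactively with -b.
cat > batch.txt <<'EOF'
put dist/app.tar.gz /builds/
get /logs/transfer.log logs/
EOF
# sftp exits non-zero if any batch command fails, which makes it CI-friendly:
# sftp -b batch.txt -i ~/.ssh/id_ed25519 deploy@sftp.example.com
wc -l < batch.txt
```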

Built by theluckystrike — More at zovo.one