How to Set Up MinIO for Artifact Storage

Every CI/CD pipeline produces artifacts: binaries, test reports, Docker layers, Terraform plans, ML model checkpoints. Pushing these to S3 adds latency and egress costs. MinIO gives you an S3-compatible object store that runs on your own hardware, speaks the same AWS SDK calls, and charges nothing per request.


Single-Node Install with Docker

mkdir -p /data/minio

# Generate the root password up front so you can record it; generating it
# inline inside docker run would leave you with no way to recover it.
MINIO_ROOT_PASSWORD="$(openssl rand -base64 32)"
echo "MinIO root password: $MINIO_ROOT_PASSWORD"

docker run -d \
  --name minio \
  --restart unless-stopped \
  -p 9000:9000 \
  -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD="$MINIO_ROOT_PASSWORD" \
  -v /data/minio:/data \
  quay.io/minio/minio server /data --console-address ":9001"

Browse the console at http://your-host:9001. The API is on port 9000.
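Before wiring anything to it, confirm the server is actually serving. MinIO exposes unauthenticated health endpoints; a quick sketch with curl (substitute your host):

```shell
# Liveness: returns HTTP 200 while the process is serving requests
HEALTH_URL="http://your-host:9000/minio/health/live"
curl -fsS -o /dev/null -w "%{http_code}\n" "$HEALTH_URL"

# Readiness: returns 200 only when the node can actually serve data
curl -fsS -o /dev/null -w "%{http_code}\n" "http://your-host:9000/minio/health/ready"
```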


Multi-Node Setup with Docker Compose

version: "3.8"

x-minio-common: &minio-common
  # Pin a specific RELEASE tag in production; all four nodes must run
  # the same MinIO version.
  image: quay.io/minio/minio:latest
  command: server http://minio{1...4}/data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: ${MINIO_ROOT_USER}
    MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
  restart: unless-stopped

services:
  minio1:
    <<: *minio-common
    hostname: minio1
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - /mnt/disk1/data:/data

  minio2:
    <<: *minio-common
    hostname: minio2
    volumes:
      - /mnt/disk2/data:/data

  minio3:
    <<: *minio-common
    hostname: minio3
    volumes:
      - /mnt/disk3/data:/data

  minio4:
    <<: *minio-common
    hostname: minio4
    volumes:
      - /mnt/disk4/data:/data
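To bring the cluster up and confirm the four nodes formed an erasure-coded pool, assuming the compose file above with port 9000 published on one node and an .env file holding the two root variables:

```shell
# Start all four nodes; MinIO forms the pool once a quorum of the
# minio{1...4} hostnames is reachable
docker compose up -d

# Cluster health: returns 200 only when the deployment has quorum
CLUSTER_HEALTH="http://localhost:9000/minio/health/cluster"
curl -fsS -o /dev/null -w "%{http_code}\n" "$CLUSTER_HEALTH"
```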

Configure with the MinIO Client (mc)

curl -LO https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc && sudo mv mc /usr/local/bin/

mc alias set artifacts http://minio.internal:9000 minioadmin "your-password"

mc mb artifacts/ci-build-outputs
mc mb artifacts/terraform-state
mc mb artifacts/ml-datasets
mc mb artifacts/docker-cache

mc ls artifacts/
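A quick round trip proves the alias and buckets work end to end; this sketch uploads a test file, inspects its metadata, and fetches it back:

```shell
# Upload, inspect, and re-download a probe object
echo "hello from ci" > /tmp/probe.txt
mc cp /tmp/probe.txt artifacts/ci-build-outputs/probe.txt
mc stat artifacts/ci-build-outputs/probe.txt
mc cp artifacts/ci-build-outputs/probe.txt /tmp/probe-back.txt
diff /tmp/probe.txt /tmp/probe-back.txt && echo "round trip OK"
```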

Bucket Policies and Access Control

cat > ci-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ci-build-outputs",
        "arn:aws:s3:::ci-build-outputs/*"
      ]
    }
  ]
}
EOF

mc admin policy create artifacts ci-write ci-policy.json
mc admin user add artifacts ci-runner "$(openssl rand -base64 24)"
mc admin policy attach artifacts ci-write --user ci-runner
# Prints an access key / secret key pair; store these in your CI secret store
mc admin user svcacct add artifacts ci-runner
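It is worth verifying that the new credentials see only what the policy grants. This sketch assumes you exported the printed key pair as CI_ACCESS_KEY and CI_SECRET_KEY (variable names are illustrative):

```shell
# Log in as the service account, not the root user
CI_ALIAS="ci"
mc alias set "$CI_ALIAS" http://minio.internal:9000 "$CI_ACCESS_KEY" "$CI_SECRET_KEY"

# Allowed by ci-write:
mc ls "$CI_ALIAS/ci-build-outputs"

# Should be denied: the policy covers only ci-build-outputs
mc ls "$CI_ALIAS/terraform-state"
```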

Using MinIO from CI/CD as an S3 Drop-In

GitHub Actions:

- name: Upload build artifact to MinIO
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.MINIO_ACCESS_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.MINIO_SECRET_KEY }}
    AWS_ENDPOINT_URL: https://minio.internal:9000
    AWS_DEFAULT_REGION: us-east-1
  run: |
    aws s3 cp ./dist/myapp.tar.gz \
      s3://ci-build-outputs/${{ github.sha }}/myapp.tar.gz \
      --endpoint-url $AWS_ENDPOINT_URL

Python upload with presigned URL:

import os

import boto3

# The commit SHA comes from the CI environment (GITHUB_SHA on GitHub Actions)
commit_sha = os.environ.get("GITHUB_SHA", "local")

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.internal:9000",
    aws_access_key_id="your-access-key",
    aws_secret_access_key="your-secret-key",
    region_name="us-east-1",
)

s3.upload_file(
    "test-results.xml",
    "ci-build-outputs",
    f"tests/{commit_sha}/results.xml",
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "ci-build-outputs", "Key": f"tests/{commit_sha}/results.xml"},
    ExpiresIn=3600,
)
print(f"Test report: {url}")

Lifecycle Rules for Automatic Cleanup

mc ilm rule add \
  --expire-days 30 \
  artifacts/ci-build-outputs

mc ilm rule add \
  --expire-delete-marker \
  --noncurrent-expire-days 3 \
  artifacts/ci-build-outputs

mc ilm rule ls artifacts/ci-build-outputs

TLS with Let’s Encrypt

# MinIO looks for TLS certs in ~/.minio/certs inside the container
# (/root/.minio/certs for the official image), so keep them in a host
# directory that is mounted there.
mkdir -p /data/minio-certs
cp /etc/letsencrypt/live/minio.yourcompany.com/fullchain.pem \
   /data/minio-certs/public.crt
cp /etc/letsencrypt/live/minio.yourcompany.com/privkey.pem \
   /data/minio-certs/private.key

# Mount the directory by adding
#   -v /data/minio-certs:/root/.minio/certs
# to the docker run command above and re-creating the container; a plain
# restart cannot add a new mount. Once the mount exists, a restart is
# enough to pick up renewed certificates:
docker restart minio
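Once the container is back up, verify the chain from a client machine. curl validates against the system trust store, so a valid Let's Encrypt certificate should pass without -k:

```shell
# TLS handshake plus liveness in one call; fails on an invalid chain
MINIO_URL="https://minio.yourcompany.com:9000"
curl -fsS -o /dev/null -w "%{http_code}\n" "$MINIO_URL/minio/health/live"

# Inspect the served certificate's expiry date
openssl s_client -connect minio.yourcompany.com:9000 \
  -servername minio.yourcompany.com </dev/null 2>/dev/null \
  | openssl x509 -noout -enddate
```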

Monitoring MinIO

MinIO exposes Prometheus metrics at /minio/v2/metrics/cluster:

scrape_configs:
  - job_name: minio
    metrics_path: /minio/v2/metrics/cluster
    scheme: https
    static_configs:
      - targets: ["minio.internal:9000"]
    bearer_token: "your-prometheus-token"

Key alerts: minio_cluster_capacity_usable_free_bytes dropping below a floor (say 10 GB), and any sustained growth in minio_s3_requests_errors_total.
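The bearer token in the scrape config is not the root password; mc generates it, along with a ready-made scrape_configs stanza you can paste into prometheus.yml:

```shell
# Prints a scrape config containing a signed JWT for the cluster endpoint
MC_ALIAS="artifacts"
mc admin prometheus generate "$MC_ALIAS"

# Node- and bucket-level metrics have their own endpoints and tokens
mc admin prometheus generate "$MC_ALIAS" node
mc admin prometheus generate "$MC_ALIAS" bucket
```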


Storing Terraform State in MinIO

MinIO works as a Terraform remote state backend using the S3-compatible protocol. This centralizes state for distributed teams without paying AWS S3 fees.

Create a dedicated bucket and lock it down:

mc mb --ignore-existing artifacts/terraform-state   # no-op if created earlier
mc anonymous set none artifacts/terraform-state

# Create a terraform-specific user
cat > tf-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject","s3:PutObject","s3:DeleteObject","s3:ListBucket"],
    "Resource": ["arn:aws:s3:::terraform-state","arn:aws:s3:::terraform-state/*"]
  }]
}
EOF
mc admin policy create artifacts terraform-state tf-policy.json
mc admin user add artifacts terraform "$(openssl rand -base64 24)"
mc admin policy attach artifacts terraform-state --user terraform
mc admin user svcacct add artifacts terraform
# Save the generated access key and secret key

Configure Terraform to use MinIO as the S3 backend:

# backend.tf
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "prod/vpc/terraform.tfstate"
    region = "us-east-1"

    # Terraform >= 1.6 syntax; on older versions use
    # endpoint = "https://minio.internal:9000" and force_path_style = true
    endpoints = {
      s3 = "https://minio.internal:9000"
    }
    use_path_style = true

    access_key                  = "your-access-key"
    secret_key                  = "your-secret-key"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
  }
}
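With backend.tf in place, initialize Terraform against the bucket. Credentials can come from the environment instead of being hard-coded in backend.tf:

```shell
# Keep secrets out of backend.tf; the S3 backend reads the standard vars
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

terraform init
# If you are moving existing local state into MinIO, use:
#   terraform init -migrate-state
```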

State locking is the gap: Terraform's S3 backend traditionally locks through DynamoDB, which MinIO does not provide. Use a PostgreSQL backend for locking, or use Terrakube/Atlantis, which manage locking at the application layer. (Terraform 1.10+ can also lock natively in S3 via use_lockfile = true, which relies on the object store supporting conditional writes.)

For teams using Terragrunt, set the backend values in a root terragrunt.hcl:

remote_state {
  backend = "s3"
  config = {
    bucket           = "terraform-state"
    key              = "${path_relative_to_include()}/terraform.tfstate"
    region           = "us-east-1"
    endpoint         = get_env("MINIO_ENDPOINT", "https://minio.internal:9000")
    access_key       = get_env("MINIO_ACCESS_KEY")
    secret_key       = get_env("MINIO_SECRET_KEY")
    force_path_style = true
    skip_credentials_validation = true
  }
}

Replication for Multi-Region Teams

If your team spans multiple offices or regions, MinIO’s site replication keeps artifact buckets synchronized so developers pull artifacts from a local node rather than a distant primary.

# Site replication links deployments by alias, and both sites must share
# the same admin credentials. Register an alias per site, then link them:
mc alias set us http://minio-us.internal:9000 minioadmin "$MINIO_ROOT_PASSWORD"
mc alias set eu http://minio-eu.internal:9000 minioadmin "$MINIO_ROOT_PASSWORD"
mc admin replicate add us eu

# Verify replication status
mc admin replicate status us

# Watch resync progress toward the EU peer
mc admin replicate resync status us eu

Site replication synchronizes IAM policies, users, groups, and bucket contents, and it is bidirectional: a CI job in the US region pushes a build artifact, and an EU developer's restore script pulls the same artifact from the local node with low latency.
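A quick way to confirm replication end to end is to push an object at one site and stat it at the other. This sketch assumes aliases named us and eu are registered for the two deployments (names are illustrative):

```shell
# Write at the US site...
echo "replication probe" > /tmp/probe.txt
mc cp /tmp/probe.txt us/ci-build-outputs/replication-probe.txt

# ...and read it back from the EU site (allow a moment for sync)
sleep 5
mc stat eu/ci-build-outputs/replication-probe.txt
```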

For unidirectional replication (primary → secondary, read-only mirror), use bucket-level replication instead:

mc replicate add artifacts/ci-build-outputs \
  --remote-bucket "http://mirror-key:mirror-secret@minio-eu.internal:9000/ci-build-outputs" \
  --priority 1

Built by theluckystrike — More at zovo.one