Load tests that only run manually before a launch aren’t load tests — they’re launch rituals. Remote teams need load tests in CI that run automatically, produce consistent results, and fail builds when performance degrades. The right tool makes this practical.
## k6 (Best for CI Integration)
k6 is the easiest to integrate into CI pipelines. Tests are JavaScript, output is structured JSON, and the CLI has clear exit codes.
Install:
```bash
# macOS
brew install k6

# Linux (Debian/Ubuntu)
sudo gpg -k
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
  --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \
  | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt update && sudo apt install k6

# Docker (-i is required so the script can be piped in on stdin)
docker run --rm -i grafana/k6 run - <tests/load/api-baseline.js
```
Write a test script:
```javascript
// tests/load/api-baseline.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Trend } from 'k6/metrics';

const errorRate = new Rate('errors');
const paymentDuration = new Trend('payment_duration');

export const options = {
  stages: [
    { duration: '2m', target: 20 },   // Ramp up to 20 users
    { duration: '5m', target: 20 },   // Stay at 20 users
    { duration: '2m', target: 100 },  // Ramp up to 100 users
    { duration: '5m', target: 100 },  // Stay at 100 users
    { duration: '2m', target: 0 },    // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95th percentile < 500ms
    http_req_failed: ['rate<0.01'],   // Error rate < 1%
    errors: ['rate<0.01'],
  },
};

export default function () {
  const BASE_URL = __ENV.BASE_URL || 'https://staging.yourcompany.com';

  // Test 1: List users
  const listRes = http.get(`${BASE_URL}/api/users`, {
    headers: { Authorization: `Bearer ${__ENV.API_TOKEN}` },
  });
  check(listRes, {
    'users list status 200': (r) => r.status === 200,
    'users list has data': (r) => r.json('data') !== null,
  });
  errorRate.add(listRes.status !== 200);
  sleep(1);

  // Test 2: Create payment (heavy endpoint)
  const start = Date.now();
  const payRes = http.post(
    `${BASE_URL}/api/payments`,
    JSON.stringify({ amount: 1000, currency: 'usd' }),
    { headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${__ENV.API_TOKEN}` } }
  );
  paymentDuration.add(Date.now() - start);
  check(payRes, {
    'payment created': (r) => r.status === 201,
    'payment has id': (r) => r.json('id') !== undefined,
  });
  sleep(2);
}
```
Run locally:
```bash
BASE_URL=https://staging.yourcompany.com \
API_TOKEN=your_token \
k6 run tests/load/api-baseline.js
```
Run in CI:
```yaml
# .github/workflows/load-test.yml
name: Load Test

on:
  workflow_dispatch:
  schedule:
    - cron: '0 4 * * 1' # Weekly, Monday 4am

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run k6 load test
        uses: grafana/k6-action@v0.3.1
        with:
          filename: tests/load/api-baseline.js
          flags: --out json=results.json
        env:
          BASE_URL: ${{ vars.STAGING_URL }}
          API_TOKEN: ${{ secrets.STAGING_API_TOKEN }}

      - name: Upload results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: load-test-results
          path: results.json
```
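The `--out json` stream is newline-delimited JSON, one object per line, with metric samples carried as `Point` records. That makes it easy to compute your own percentiles and add custom build gates beyond k6's built-in thresholds. A minimal sketch (the `http_req_duration` metric name and `Point` shape are k6's documented output format; the nearest-rank percentile is a simplification of k6's own estimator):

```python
import json

def p95_http_duration(path):
    """Compute the 95th-percentile http_req_duration (ms) from k6's NDJSON output.

    Metric samples have type "Point" and carry the value in data.value.
    """
    values = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            point = json.loads(line)
            if point.get("type") == "Point" and point.get("metric") == "http_req_duration":
                values.append(point["data"]["value"])
    if not values:
        raise ValueError("no http_req_duration samples found")
    values.sort()
    # Nearest-rank 95th percentile
    return values[max(0, int(0.95 * len(values)) - 1)]
```

k6's own thresholds already fail the run with a non-zero exit code, so this is mainly useful when you want gates k6 cannot express, such as comparing against a stored baseline from the previous run.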
## Gatling (JVM, Report Generation)
Gatling’s strength is its HTML reports, which are detailed enough to share with stakeholders without further processing.
```bash
# Install SDKMAN!, then Java and Gatling through it
curl -s https://get.sdkman.io | bash
sdk install java 21.0.2-tem
sdk install gatling 3.11.0
```
Scala simulation:
```scala
// src/gatling/simulations/ApiSimulation.scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class ApiSimulation extends Simulation {

  val httpProtocol = http
    .baseUrl("https://staging.yourcompany.com")
    .acceptHeader("application/json")
    .contentTypeHeader("application/json")
    .header("Authorization", s"Bearer ${System.getenv("API_TOKEN")}")

  val feeder = csv("data/users.csv").random

  val userScenario = scenario("User Flow")
    .exec(
      http("Get Users")
        .get("/api/users")
        .check(status.is(200))
        .check(jsonPath("$.data").exists)
    )
    .pause(1)
    .feed(feeder)
    .exec(
      http("Get User Profile")
        .get("/api/users/#{userId}") // Gatling EL: "${userId}" is deprecated since 3.7
        .check(status.is(200))
        .check(responseTimeInMillis.lte(500))
    )

  setUp(
    userScenario.inject(
      rampUsers(50).during(2.minutes),
      constantUsersPerSec(20).during(5.minutes)
    )
  ).protocols(httpProtocol)
    .assertions(
      global.responseTime.percentile3.lt(500), // percentile3 is p95 by default
      global.successfulRequests.percent.gt(99)
    )
}
```
```bash
# Run and generate report
gatling.sh -s ApiSimulation
# Report at target/gatling/apisimulation-TIMESTAMP/index.html
```
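If you want to gate CI on the run rather than eyeball the HTML, the report directory also contains machine-readable stats. A sketch, assuming Gatling 3.x's `js/stats.json` layout — this file is not a stable public API, so verify the field names against your version's output:

```python
import json
from pathlib import Path

def latest_report_p95(results_dir="target/gatling"):
    """Return the global p95 response time (ms) from the newest Gatling report.

    Assumes Gatling 3.x writes js/stats.json with a top-level "stats" object
    whose "percentiles3" entry is p95 by default (configurable in gatling.conf).
    """
    reports = sorted(Path(results_dir).iterdir(), key=lambda p: p.stat().st_mtime)
    if not reports:
        raise FileNotFoundError(f"no reports under {results_dir}")
    stats = json.loads((reports[-1] / "js" / "stats.json").read_text())
    return stats["stats"]["percentiles3"]["total"]
```

In practice the `.assertions(...)` block above is the primary gate — Gatling exits non-zero when an assertion fails — so treat this as a supplement for trend tracking, not a replacement.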
## Locust (Python, Custom Logic)
Locust is ideal when your load test needs complex business logic that’s hard to express in k6 or Gatling.
```bash
pip install locust
```
```python
# locustfile.py
from locust import HttpUser, task, between
from locust.exception import StopUser
import random

class APIUser(HttpUser):
    wait_time = between(1, 3)
    token = None

    def on_start(self):
        """Login once per user on start"""
        response = self.client.post("/api/auth/login", json={
            "email": f"test-{random.randint(1, 1000)}@example.com",
            "password": "testpassword"
        })
        if response.status_code == 200:
            self.token = response.json()["token"]
            self.client.headers.update({"Authorization": f"Bearer {self.token}"})
        else:
            raise StopUser()

    @task(3)
    def browse_products(self):
        with self.client.get("/api/products", catch_response=True) as r:
            if r.status_code == 200 and len(r.json()["data"]) > 0:
                r.success()
            else:
                r.failure(f"Unexpected: {r.status_code}")

    @task(1)
    def place_order(self):
        self.client.post(
            "/api/orders",
            json={"product_id": random.randint(1, 100), "quantity": 1},
            name="/api/orders [POST]"
        )
```
Run:
```bash
# Interactive web UI
locust --host=https://staging.yourcompany.com

# Headless for CI
locust \
  --host=https://staging.yourcompany.com \
  --users 100 \
  --spawn-rate 10 \
  --run-time 5m \
  --headless \
  --csv=results
```
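Headless runs write `results_stats.csv`, which a small script can turn into a pass/fail signal for CI. A sketch against Locust 2.x column names (`Name`, `Request Count`, `Failure Count` — older versions used different headers, so check your file first):

```python
import csv

def aggregated_failure_ratio(stats_csv="results_stats.csv"):
    """Return failures/requests from the 'Aggregated' row of Locust's --csv stats file."""
    with open(stats_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["Name"] == "Aggregated":
                requests = int(row["Request Count"])
                failures = int(row["Failure Count"])
                return failures / requests if requests else 0.0
    raise ValueError("no Aggregated row in stats file")
```

Exit non-zero when the ratio exceeds 0.01 to match the 1% error budget used in the k6 thresholds. (Locust can also enforce this itself via `--exit-code-on-error` or an `@events.quitting` hook.)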
## Artillery (YAML-First)
Artillery uses YAML configs and is quickest to write for simple HTTP scenarios:
```bash
npm install -g artillery@latest
```
```yaml
# tests/load/artillery.yml
config:
  target: https://staging.yourcompany.com
  phases:
    - duration: 120
      arrivalRate: 10
      rampTo: 50
      name: Warm up
    - duration: 300
      arrivalRate: 50
      name: Sustained load
  plugins:
    expect: {} # enables the expect assertions used below
  defaults:
    headers:
      Authorization: "Bearer {{ $env.API_TOKEN }}"

scenarios:
  - name: User journey
    weight: 70
    flow:
      - get:
          url: /api/products
          expect:
            - statusCode: 200
            - hasProperty: data
      - post:
          url: /api/cart
          json:
            product_id: 42
            quantity: 1
          expect:
            - statusCode: 201
  - name: Auth flow
    weight: 30
    flow:
      - post:
          url: /api/auth/login
          json:
            email: "test@example.com"
            password: "password"
          capture:
            - json: $.token
              as: authToken
      - get:
          url: /api/profile
          headers:
            Authorization: "Bearer {{ authToken }}"
```
```bash
API_TOKEN=your_token artillery run tests/load/artillery.yml --output results.json
artillery report results.json --output results.html
```
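The JSON report can also be gated in CI without the HTML step. A sketch assuming Artillery v2's aggregate schema, where latency summaries live under `aggregate.summaries` keyed by metric name — field names differ across versions, so inspect your own `results.json` before relying on this:

```python
import json

def artillery_p95(report_path="results.json"):
    """Pull the overall p95 response time (ms) from an Artillery JSON report.

    Assumes the v2 layout: aggregate.summaries["http.response_time"].p95.
    """
    with open(report_path) as f:
        report = json.load(f)
    return report["aggregate"]["summaries"]["http.response_time"]["p95"]
```

Artillery also has a built-in alternative: an `ensure` block in `config` (e.g. `p95: 500`) makes the CLI itself exit non-zero, which is the simpler choice when you don't need custom logic.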
## Tool Comparison
| Tool | Language | Best For | CI-Friendly |
|---|---|---|---|
| k6 | JavaScript | General API load testing | Excellent |
| Gatling | Scala/Java | Reports for stakeholders | Good |
| Locust | Python | Complex business logic | Good |
| Artillery | YAML/JS | Quick scenario definition | Good |
For most remote teams: use k6 in CI for automated regression tests, Gatling for pre-release load reports to stakeholders.
## Related Reading
- Best Tools for Remote Team API Mocking
- How to Set Up Netdata for Server Monitoring
- Deploy a secure Element (Matrix) server for pen testing