Errors happen in every production deployment. Without error tracking, remote teams find out about bugs from customer support tickets — hours after they started occurring. Error tracking gives you instant notification, stack traces with context, and the ability to track resolution progress asynchronously.
Sentry (Industry Standard)
Sentry is the most widely used error tracking platform. The self-hosted version (Sentry CE) is open source; the SaaS version starts at $26/month.
Self-hosted with Docker:
```yaml
# docker-compose.yml (abbreviated — use the official sentry/self-hosted repo)
version: "3.8"
services:
  sentry-web:
    image: sentry:latest
    environment:
      SENTRY_SECRET_KEY: ${SENTRY_SECRET_KEY}
      SENTRY_POSTGRES_HOST: postgres
      SENTRY_REDIS_HOST: redis
    ports:
      - "9000:9000"
    command: sentry run web
  sentry-worker:
    image: sentry:latest
    environment:
      SENTRY_SECRET_KEY: ${SENTRY_SECRET_KEY}
      SENTRY_POSTGRES_HOST: postgres
      SENTRY_REDIS_HOST: redis
    command: sentry run worker
  sentry-cron:
    image: sentry:latest
    command: sentry run cron
```
For the full self-hosted setup, use the official installer:
```bash
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
./install.sh
docker compose up -d
```
SDK Integration — Python:
```python
import os

import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.redis import RedisIntegration

def filter_sensitive_errors(event, hint):
    """Filter out noise before sending to Sentry."""
    if "exception" in event:
        exc_type = event["exception"]["values"][0].get("type", "")
        # Don't track expected errors
        if exc_type in ["NotFound", "PermissionDenied", "ValidationError"]:
            return None
    return event

sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    environment=os.environ.get("ENVIRONMENT", "production"),
    release=os.environ.get("GIT_SHA", "unknown"),
    traces_sample_rate=0.1,     # 10% performance traces
    profiles_sample_rate=0.05,
    integrations=[
        DjangoIntegration(transaction_style="url"),
        CeleryIntegration(),
        RedisIntegration(),
    ],
    send_default_pii=False,     # don't send emails or IP addresses
    before_send=filter_sensitive_errors,  # defined above so it exists at init time
)
```
SDK Integration — Go:
```go
package main

import (
	"log"
	"net/http"
	"os"
	"time"

	"github.com/getsentry/sentry-go"
	sentryhttp "github.com/getsentry/sentry-go/http"
)

func main() {
	if err := sentry.Init(sentry.ClientOptions{
		Dsn:              os.Getenv("SENTRY_DSN"),
		Environment:      os.Getenv("ENVIRONMENT"),
		Release:          os.Getenv("GIT_SHA"),
		TracesSampleRate: 0.1,
		BeforeSend: func(event *sentry.Event, hint *sentry.EventHint) *sentry.Event {
			// Scrub PII before sending
			if event.User.Email != "" {
				event.User.Email = "[FILTERED]"
			}
			return event
		},
	}); err != nil {
		log.Fatalf("sentry.Init: %v", err)
	}
	defer sentry.Flush(2 * time.Second)

	sentryHandler := sentryhttp.New(sentryhttp.Options{
		Repanic: true, // re-raise panics after reporting so other recovery middleware still runs
	})
	http.Handle("/", sentryHandler.Handle(myHandler))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
Adding Custom Context to Sentry Events
Raw stack traces are useful, but the error is much faster to diagnose when you attach business context — which user triggered it, what they were doing, and what state the application was in:
```python
import sentry_sdk

def process_order(order_id: str, user_id: str):
    with sentry_sdk.push_scope() as scope:
        scope.set_tag("order.id", order_id)
        scope.set_tag("user.tier", get_user_tier(user_id))
        scope.set_context("order", {
            "id": order_id,
            "status": "processing",
            "items_count": get_order_items_count(order_id),
        })
        scope.set_user({"id": user_id})
        try:
            return run_payment_flow(order_id)
        except PaymentError as e:
            sentry_sdk.capture_exception(e)
            raise
```
This adds a searchable tag and structured context to every error that occurs inside this function. When you’re debugging at midnight from different time zones, having order.id directly in Sentry reduces the investigation loop from 20 minutes to 2.
GlitchTip (Self-Hosted Sentry Alternative)
GlitchTip is Sentry-compatible (uses the same SDK) but much simpler to self-host — a single Docker container vs Sentry’s ~20 containers.
```yaml
# docker-compose.yml
version: "3.8"
services:
  glitchtip-web:
    image: glitchtip/glitchtip:latest
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://glitchtip:${DB_PASSWORD}@postgres:5432/glitchtip
      SECRET_KEY: ${SECRET_KEY}
      REDIS_URL: redis://redis:6379/0
      DEFAULT_FROM_EMAIL: errors@yourcompany.com
      EMAIL_URL: smtp://mail.yourcompany.com:587
      CELERY_WORKER_CONCURRENCY: 2
      GLITCHTIP_MAX_EVENT_LIFE_DAYS: 90
    depends_on:
      - postgres
      - redis
  glitchtip-worker:
    image: glitchtip/glitchtip:latest
    command: celery -A glitchtip worker --concurrency=2
    environment:
      DATABASE_URL: postgres://glitchtip:${DB_PASSWORD}@postgres:5432/glitchtip
      SECRET_KEY: ${SECRET_KEY}
      REDIS_URL: redis://redis:6379/0
  glitchtip-beat:
    image: glitchtip/glitchtip:latest
    command: celery -A glitchtip beat
    environment:
      DATABASE_URL: postgres://glitchtip:${DB_PASSWORD}@postgres:5432/glitchtip
      SECRET_KEY: ${SECRET_KEY}
      REDIS_URL: redis://redis:6379/0
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: glitchtip
      POSTGRES_USER: glitchtip
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - glitchtip-postgres:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
volumes:
  glitchtip-postgres:
```
Since GlitchTip uses the Sentry protocol, all Sentry SDKs work unchanged — just point the DSN at your GlitchTip instance.
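For example, switching an existing Python service is a one-line change. A minimal sketch (the hostname and project ID in the fallback DSN are placeholders for your own instance):

```python
import os

import sentry_sdk

# Same SDK, same options — only the DSN host changes. The <key> and
# trailing project ID below are placeholders for your GlitchTip instance.
sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN", "https://<key>@glitchtip.yourcompany.com/1"),
    environment=os.environ.get("ENVIRONMENT", "production"),
)
```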
When to Choose GlitchTip Over Sentry CE
GlitchTip trades Sentry’s performance monitoring and session replay features for dramatically simpler operations. If your team’s primary need is error grouping and alerting — not APM — GlitchTip is easier to run sustainably. A single t3.small with 4 GB RAM is sufficient for teams shipping a few thousand errors per day. Sentry CE needs a minimum of 8 containers and 16 GB RAM on the control node.
Rollbar (SaaS, Notification-Focused)
Rollbar’s strength is its notification routing. It can send to Slack, PagerDuty, GitHub Issues, and Jira simultaneously, with per-project and per-error-type routing rules.
```javascript
// Node.js SDK
const os = require('os');
const Rollbar = require('rollbar');

const rollbar = new Rollbar({
  accessToken: process.env.ROLLBAR_ACCESS_TOKEN,
  captureUncaught: true,
  captureUnhandledRejections: true,
  environment: process.env.NODE_ENV,
  codeVersion: process.env.GIT_SHA,
  payload: {
    server: {
      host: os.hostname(),
    },
    // Note: person data cannot be set here — there is no request in
    // scope at init time. Attach it per request (see below).
  },
  // Don't report 404s as errors
  checkIgnore: function (isUncaught, args, payload) {
    return payload?.status === 404;
  },
});

// Express middleware
app.use(rollbar.errorHandler());

// Manual error capture with context; passing req lets Rollbar attach
// request (and person) data to the report
try {
  await processPayment(paymentData);
} catch (err) {
  rollbar.error(err, req, {
    payment_id: paymentData.id,
    amount: paymentData.amount,
    user_id: user.id,
  });
  throw err;
}
```
Source Maps for Frontend Errors
Frontend errors without source maps show minified stack traces that are useless. Upload source maps as part of your CI deployment:
```bash
# Sentry CLI for source map upload
npm install -g @sentry/cli

# After building the frontend
sentry-cli releases new "$GIT_SHA"
sentry-cli releases files "$GIT_SHA" upload-sourcemaps ./dist \
  --url-prefix '~/static/js' \
  --rewrite
sentry-cli releases finalize "$GIT_SHA"
sentry-cli releases deploys "$GIT_SHA" new -e production
```
In your webpack/vite config:
```javascript
// vite.config.js
import { sentryVitePlugin } from "@sentry/vite-plugin";

export default {
  build: {
    sourcemap: true, // required so the plugin has maps to upload
  },
  plugins: [
    sentryVitePlugin({
      authToken: process.env.SENTRY_AUTH_TOKEN,
      org: "your-org",
      project: "frontend",
    }),
  ],
};
```
Noise Reduction for Remote Teams
Error tracking is only useful if the team actually looks at it. Reduce noise:
```python
# Python: ignore expected errors
import os

import sentry_sdk

sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    ignore_errors=[
        "KeyboardInterrupt",
        "SystemExit",
        "DisconnectedError",     # nginx upstream disconnects
        "ConnectionResetError",  # client disconnects
    ],
)
```

Rate-limit repeated errors in the Sentry UI under Project Settings > Inbound Filters:

- Enable "Filter known browser extension errors"
- Enable "Filter localhost errors"
- Set a rate limit per issue (e.g. 100 events/minute)
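You can also suppress repeats client-side before they ever leave the process, which saves event quota. A minimal sketch of a `before_send` deduplicator — the one-event-per-type-per-minute window is an arbitrary choice for illustration, not a Sentry default:

```python
import time
from collections import defaultdict

# Track when each exception type was last sent (module-level state)
_last_sent = defaultdict(lambda: float("-inf"))
WINDOW_SECONDS = 60  # at most one event per exception type per minute

def dedupe_before_send(event, hint):
    """Drop an event if the same exception type was sent recently."""
    values = event.get("exception", {}).get("values", [])
    exc_type = values[0].get("type", "unknown") if values else "unknown"
    now = time.monotonic()
    if now - _last_sent[exc_type] < WINDOW_SECONDS:
        return None  # suppressed: same type sent too recently
    _last_sent[exc_type] = now
    return event

# Wire it up with: sentry_sdk.init(before_send=dedupe_before_send, ...)
```

Note this is per-process state; with many workers each process keeps its own window, so treat it as a noise damper rather than a hard quota.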
Set up issue assignment rules so errors go to the right team:
```text
# Sentry Ownership Rules (Project Settings > Ownership Rules)
# Teams are referenced with a leading "#"
path:src/payments/*   #payments-team
path:src/auth/*       #security-team
url:*/api/v2/*        #backend-team
tags.logger:frontend  #frontend-team
```
Error Budgets for Async Teams
Define an explicit error rate budget per service so the team has a shared threshold for when to stop shipping and address reliability:
```python
#!/usr/bin/env python3
# error_budget.py — report error budget status for Slack
import os
from datetime import datetime, timedelta, timezone

import requests

SENTRY_TOKEN = os.environ["SENTRY_AUTH_TOKEN"]
ORG = os.environ["SENTRY_ORG"]
PROJECT = os.environ["SENTRY_PROJECT"]

# Error budget: 99.5% success rate = 0.5% allowed errors. Computing the
# actual rate needs a request count from your metrics system; Sentry
# only knows how many error events it received.
ERROR_BUDGET_PERCENT = 0.5

def get_error_count(hours=24):
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    stats = requests.get(
        f"https://sentry.io/api/0/projects/{ORG}/{PROJECT}/stats/",
        headers={"Authorization": f"Bearer {SENTRY_TOKEN}"},
        params={
            "since": int(start.timestamp()),
            "until": int(end.timestamp()),
            "stat": "received",
            "resolution": "1h",
        },
        timeout=10,
    ).json()
    # The stats endpoint returns a list of [timestamp, count] pairs
    return sum(point[1] for point in stats)

errors = get_error_count(hours=24)
print(f"Errors last 24h: {errors}")
print("Budget status: review if errors exceed the threshold for your traffic volume")
```
Post the budget status to Slack on a daily schedule so the whole team sees it during async standups, not just the engineer who happened to check Sentry that morning.
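A minimal sketch of the Slack side, assuming an incoming-webhook URL stored in a `SLACK_WEBHOOK_URL` environment variable (the message wording is an example, not a required schema):

```python
import json
import os
import urllib.request

def build_budget_message(error_count: int, hours: int = 24) -> dict:
    """Format an error-budget summary as a Slack webhook payload."""
    return {
        "text": (
            f"Error budget check: {error_count} error events "
            f"in the last {hours}h. Review in Sentry if this "
            f"exceeds your threshold."
        )
    }

def post_to_slack(error_count: int) -> None:
    """POST the summary to a Slack incoming webhook."""
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps(build_budget_message(error_count)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    # Wire this to the count from your error-budget script
    post_to_slack(error_count=142)
```

Run it from cron or a scheduled CI job so the report lands at a fixed time each day.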
Alerting Routing for Distributed On-Call
Error tracking tools are only as useful as their alerting configuration. For remote teams across time zones, poor routing means the wrong person gets paged at 3am for an issue outside their domain.
Configure routing rules in Sentry to send alerts based on the code path, not just the project:
```text
# Sentry alert configuration via sentry-cli or Terraform:
# - Route payment errors to the payments team's Slack channel
# - Route auth errors to the security team's PagerDuty
# - Route everything else to a general #errors channel at low priority
```
Use PagerDuty or Opsgenie to implement time-zone-aware on-call rotations. Set up escalation policies so an alert that goes unacknowledged for 15 minutes automatically escalates to the next person in the rotation, regardless of time zone.
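If you do adopt PagerDuty, incidents can be triggered programmatically through its Events API v2. A hedged sketch — the routing key and summary are placeholders, and `severity` must be one of `critical`, `error`, `warning`, or `info`:

```python
import json
import os
import urllib.request

def build_pagerduty_event(summary: str, severity: str = "error",
                          source: str = "error-tracker") -> dict:
    """Build an Events API v2 trigger payload."""
    assert severity in {"critical", "error", "warning", "info"}
    return {
        # Routing key comes from the service's Events API v2 integration
        "routing_key": os.environ.get("PD_ROUTING_KEY", "<integration-routing-key>"),
        "event_action": "trigger",
        "payload": {
            "summary": summary,
            "source": source,
            "severity": severity,
        },
    }

def trigger_incident(summary: str) -> None:
    """Send a trigger event to PagerDuty's Events API v2 endpoint."""
    req = urllib.request.Request(
        "https://events.pagerduty.com/v2/enqueue",
        data=json.dumps(build_pagerduty_event(summary)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    trigger_incident("Payment errors spiked 50% vs. last hour")
```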
For teams that don’t want full PagerDuty overhead, Sentry’s built-in alert rules with Slack routing, plus a simple on-call schedule posted in your team handbook, are sufficient for most sub-20-person teams:
```text
# Sentry alert rules (configurable in Project Settings > Alerts)
# Rule 1: Critical errors — any new issue with >10 occurrences/hour
#   Action: Notify #alerts-critical in Slack + email on-call engineer
#
# Rule 2: Error spike — error rate increases 50% vs. last hour
#   Action: Notify #alerts-ops in Slack
#
# Rule 3: New error in production — any first-seen error
#   Action: Notify #errors-review in Slack (low priority, review async)
```
Keep the critical alert channel genuinely critical. If it fires more than 3 times per week on non-critical issues, the team will start ignoring it — the classic alert fatigue failure mode.
Tool Comparison
| Tool | Hosting | Cost | Best For |
|---|---|---|---|
| Sentry CE | Self-hosted | Free | Full control, data residency |
| GlitchTip | Self-hosted | Free | Sentry-compatible, simple ops |
| Sentry SaaS | Cloud | From $26/mo | Managed, teams of 5-50 |
| Rollbar | Cloud | From $12/mo | Notification routing |
| Honeybadger | Cloud | From $25/mo | Small teams, uptime + errors |
Related Reading
- How to Set Up Vector for Log Processing
- How to Set Up Fluentd for Log Collection
- How to Create Automated Status Pages
- Best Bug Tracking Setup for a 7-Person Remote QA Team