# Best Tools for Remote Team Sprint Velocity
Velocity is the average story points a team completes per sprint. For remote teams, the hard part isn’t the math — it’s getting consistent data when team members are across time zones and stand-ups are async. This guide covers the tools and scripts that make velocity tracking accurate and actionable.
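The arithmetic itself is trivial; a minimal sketch with made-up sprint numbers, using a rolling window so old sprints don't drag the average:

```python
# Hypothetical sprint history: completed story points per sprint.
history = [38, 45, 41, 52, 40, 47]

def rolling_velocity(points_per_sprint, window=3):
    """Average points over the last `window` sprints."""
    recent = points_per_sprint[-window:]
    return round(sum(recent) / len(recent), 1)

print(rolling_velocity(history))            # average of the last 3 sprints
print(rolling_velocity(history, window=6))  # average of all 6
```

Everything else in this guide is about making sure the numbers feeding that average are trustworthy.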
## Why Velocity Tracking Breaks for Remote Teams
Common failure modes:
- Tickets moved to “Done” at different times (some at end of sprint, some days later)
- No consistent definition of “complete” so partial work gets counted
- Capacity changes (PTO, time zone overlap reductions) aren’t tracked against velocity
- Velocity charts in Jira/Linear that no one looks at because they don’t account for team size changes
The fix isn’t a better chart — it’s enforcing a definition of done and normalizing velocity for capacity. Both require a small amount of process discipline plus the right API queries.
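Enforcing a definition of done in code means counting an issue only if it reached a terminal state before the sprint closed. A minimal sketch — the field names here are illustrative, not any tracker's API:

```python
from datetime import datetime, timezone

# States that satisfy the team's definition of done (an assumption).
DONE_STATES = {"Done", "Released"}

def countable(issue, sprint_end):
    """An issue counts toward velocity only if it met the definition of
    done before the sprint closed — late or partial work is excluded."""
    return (
        issue["state"] in DONE_STATES
        and issue["completed_at"] is not None
        and issue["completed_at"] <= sprint_end
    )

sprint_end = datetime(2025, 6, 13, tzinfo=timezone.utc)
issues = [
    {"state": "Done", "completed_at": datetime(2025, 6, 12, tzinfo=timezone.utc), "points": 5},
    {"state": "Done", "completed_at": datetime(2025, 6, 15, tzinfo=timezone.utc), "points": 3},  # closed late
    {"state": "In Review", "completed_at": None, "points": 8},  # partial work
]
velocity = sum(i["points"] for i in issues if countable(i, sprint_end))
print(velocity)  # only the first issue counts
```

The same predicate works regardless of tracker; only the field mapping changes.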
## Tool 1: Linear (Best for Engineering Teams)
Linear’s Cycles feature is the cleanest sprint tracking interface available. Issues have an explicit cycle scope, and the API lets you pull velocity data programmatically.
Query sprint velocity via the Linear GraphQL API:

```python
#!/usr/bin/env python3
# linear_velocity.py — print velocity for the last 6 cycles
import os

import requests

LINEAR_API_KEY = os.environ["LINEAR_API_KEY"]
TEAM_ID = os.environ["LINEAR_TEAM_ID"]


def query_linear(query, variables=None):
    resp = requests.post(
        "https://api.linear.app/graphql",
        json={"query": query, "variables": variables or {}},
        headers={"Authorization": LINEAR_API_KEY},
    )
    resp.raise_for_status()
    return resp.json()["data"]


cycles_query = """
query($teamId: String!) {
  cycles(filter: { team: { id: { eq: $teamId } } }, first: 6, orderBy: updatedAt) {
    nodes {
      id
      name
      startsAt
      endsAt
      completedAt
      issues {
        nodes {
          id
          title
          estimate
          completedAt
          state { name type }
          assignee { name }
        }
      }
    }
  }
}
"""

data = query_linear(cycles_query, {"teamId": TEAM_ID})
cycles = data["cycles"]["nodes"]

print(f"{'Sprint':<20} {'Points':<8} {'Issues':<8} {'Per Engineer'}")
print("-" * 50)
for cycle in cycles:
    # Count only issues whose workflow state is terminal — this enforces
    # the definition of done rather than trusting the cycle contents.
    completed_issues = [
        i for i in cycle["issues"]["nodes"]
        if i["state"]["type"] == "completed"
    ]
    total_points = sum(i["estimate"] or 0 for i in completed_issues)
    engineers = len({
        i["assignee"]["name"] for i in completed_issues
        if i["assignee"]
    })
    per_eng = round(total_points / engineers, 1) if engineers else 0
    print(f"{cycle['name']:<20} {total_points:<8} {len(completed_issues):<8} {per_eng}")
```
### Linear Cycle Burndown Data
Pull burndown data per cycle to understand pace mid-sprint, not just at the end:
```python
#!/usr/bin/env python3
# linear_burndown.py — daily remaining points for the current cycle
import os
from datetime import datetime

import requests

LINEAR_API_KEY = os.environ["LINEAR_API_KEY"]
TEAM_ID = os.environ["LINEAR_TEAM_ID"]

burndown_query = """
query($teamId: String!) {
  cycles(
    filter: { team: { id: { eq: $teamId } }, isActive: { eq: true } }
    first: 1
  ) {
    nodes {
      name
      startsAt
      endsAt
      issues {
        nodes {
          estimate
          completedAt
          state { type }
        }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.linear.app/graphql",
    json={"query": burndown_query, "variables": {"teamId": TEAM_ID}},
    headers={"Authorization": LINEAR_API_KEY},
)
resp.raise_for_status()

cycle = resp.json()["data"]["cycles"]["nodes"][0]
issues = cycle["issues"]["nodes"]

total_points = sum(i["estimate"] or 0 for i in issues)
completed_points = sum(
    i["estimate"] or 0 for i in issues
    if i["state"]["type"] == "completed"
)
remaining = total_points - completed_points

starts = datetime.fromisoformat(cycle["startsAt"].replace("Z", "+00:00"))
ends = datetime.fromisoformat(cycle["endsAt"].replace("Z", "+00:00"))
total_days = (ends - starts).days
elapsed_days = (datetime.now(starts.tzinfo) - starts).days
ideal_remaining = total_points * (1 - elapsed_days / total_days)

print(f"Cycle: {cycle['name']}")
print(f"Total points: {total_points}")
print(f"Completed: {completed_points}")
print(f"Remaining: {remaining} (ideal: {ideal_remaining:.0f})")
print(f"Status: {'On track' if remaining <= ideal_remaining else 'Behind'}")
```
## Tool 2: Jira Cloud (with JQL and Automation)
Jira has more configuration overhead but is mandated at many organizations. Use JQL to extract velocity data cleanly.
JQL for completed sprint work:

```
project = MYPROJ
  AND sprint in closedSprints()
  AND sprint = "Sprint 42"
  AND status in (Done, Released)
  AND issuetype in (Story, Task, Bug)
ORDER BY resolutiondate ASC
```
Export via the Jira REST API. Note that story points live in a site-specific custom field, not a field named `story_points` — look up your field ID via `/rest/api/3/field` first:

```bash
#!/bin/bash
# jira-velocity.sh — print points completed per sprint
JIRA_URL="https://yourorg.atlassian.net"
JIRA_EMAIL="you@yourcompany.com"
JIRA_TOKEN="$JIRA_API_TOKEN"
PROJECT="MYPROJ"
SPRINTS_BACK=8

# Story points custom field ID varies per site; this is a common default.
POINTS_FIELD="customfield_10016"

# curl handles basic auth encoding itself via -u
AUTH=(-u "$JIRA_EMAIL:$JIRA_TOKEN")

# Resolve the project's board, then its most recent closed sprints
board_id=$(curl -s "${AUTH[@]}" \
  "$JIRA_URL/rest/agile/1.0/board?projectKeyOrId=$PROJECT" \
  | jq -r '.values[0].id')

sprint_ids=$(curl -s "${AUTH[@]}" \
  "$JIRA_URL/rest/agile/1.0/board/$board_id/sprint?state=closed&maxResults=$SPRINTS_BACK" \
  | jq -r '.values[].id')

for sprint_id in $sprint_ids; do
  sprint_name=$(curl -s "${AUTH[@]}" \
    "$JIRA_URL/rest/agile/1.0/sprint/$sprint_id" | jq -r '.name')
  points=$(curl -s "${AUTH[@]}" \
    "$JIRA_URL/rest/agile/1.0/sprint/$sprint_id/issue?jql=status%3DDone&fields=$POINTS_FIELD" \
    | jq "[.issues[].fields.$POINTS_FIELD // 0] | add // 0")
  echo "$sprint_name: $points points"
done
```
### Jira Automation for Sprint Close Reports
Use Jira’s built-in Automation feature (Project Settings > Automation) to generate a sprint summary automatically when a sprint closes:
- Trigger: Sprint completed
- Action: Send Slack message to #engineering
- Message template: `Sprint {{sprint.name}} closed. Completed: {{sprint.completedIssuesCount}} issues. Story points: {{sprint.completedStoryPoints}}.`
This avoids the need for custom API scripts for teams that live in Jira’s UI.
## Tool 3: GitHub Issues + Custom Script
Teams using GitHub Issues for project tracking can calculate velocity from closed issues in a milestone:
```python
#!/usr/bin/env python3
# github_velocity.py — velocity from closed issues per milestone
import os

import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
REPO = os.environ["GITHUB_REPO"]  # "yourorg/yourrepo"

headers = {
    "Authorization": f"Bearer {GITHUB_TOKEN}",
    "Accept": "application/vnd.github.v3+json",
}

# Get recently closed milestones (one milestone per sprint)
milestones = requests.get(
    f"https://api.github.com/repos/{REPO}/milestones",
    headers=headers,
    params={"state": "closed", "per_page": 10, "sort": "due_on", "direction": "desc"},
).json()

print(f"{'Sprint':<30} {'Issues Closed':<15} {'Story Pts (label)'}")
print("-" * 60)
for milestone in milestones[:8]:
    issues = requests.get(
        f"https://api.github.com/repos/{REPO}/issues",
        headers=headers,
        params={
            "milestone": milestone["number"],
            "state": "closed",
            "per_page": 100,
        },
    ).json()
    # The issues endpoint also returns pull requests — exclude them
    issues = [i for i in issues if "pull_request" not in i]
    # Story points encoded as labels like "points:3", "points:5", "points:8"
    total_points = 0
    for issue in issues:
        for label in issue.get("labels", []):
            if label["name"].startswith("points:"):
                total_points += int(label["name"].split(":")[1])
                break
    print(f"{milestone['title']:<30} {len(issues):<15} {total_points}")
```
## Capacity-Adjusted Velocity
Raw velocity is misleading if team size changes. Normalize per engineer-sprint:
```python
# Capacity-adjusted velocity: points per engineer-day
def adjusted_velocity(points_completed, planned_capacity_days):
    """
    Returns points per engineer-day, normalized for capacity.

    planned_capacity_days: total working days available across all engineers,
    e.g. a 5-person team × 10-day sprint = 50 engineer-days;
    with one person on PTO for 3 days = 47 engineer-days.
    """
    return round(points_completed / planned_capacity_days, 2)

# Example usage
sprint_data = [
    {"sprint": "Sprint 40", "points": 42, "capacity_days": 50},
    {"sprint": "Sprint 41", "points": 38, "capacity_days": 47},  # PTO
    {"sprint": "Sprint 42", "points": 45, "capacity_days": 50},
]
for sprint in sprint_data:
    adj = adjusted_velocity(sprint["points"], sprint["capacity_days"])
    print(f"{sprint['sprint']}: {sprint['points']} pts | {adj} pts/eng-day")

# Use the average of the last 4 sprints to forecast the next one
recent = sprint_data[-4:]
avg_adj_velocity = sum(s["points"] / s["capacity_days"] for s in recent) / len(recent)
next_sprint_capacity = 48  # one person has 2 PTO days
forecast = round(avg_adj_velocity * next_sprint_capacity)
print(f"\nForecast for next sprint ({next_sprint_capacity} eng-days): ~{forecast} points")
```
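A single point forecast hides variance. Bracketing the forecast with the best and worst recent rates gives a range that sets better expectations; a sketch reusing the same made-up sprint records:

```python
# Hypothetical per-sprint data, mirroring the records above.
sprint_data = [
    {"sprint": "Sprint 40", "points": 42, "capacity_days": 50},
    {"sprint": "Sprint 41", "points": 38, "capacity_days": 47},
    {"sprint": "Sprint 42", "points": 45, "capacity_days": 50},
]

# Per-sprint rates (points per engineer-day)
rates = [s["points"] / s["capacity_days"] for s in sprint_data]

next_capacity = 48  # engineer-days planned for the next sprint
low = round(min(rates) * next_capacity)
high = round(max(rates) * next_capacity)
print(f"Forecast range: {low}-{high} points")
```

Quoting a range ("39 to 43 points, depending on interrupts") is usually more honest than a single number.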
## Tracking Capacity Changes Across Time Zones
For globally distributed teams, available overlap hours matter as much as headcount. A 5-person team with 2 hours of daily overlap has effectively less collaborative capacity than a 4-person co-located team. Track this explicitly:
```python
# capacity_tracker.py — log sprint capacity with overlap hours
import json
from datetime import date

CAPACITY_LOG = "sprint_capacity.json"
SPRINT_DAYS = 10  # working days per sprint

def log_sprint_capacity(sprint_name, engineers, pto_days, overlap_hours_per_day):
    """
    engineers: list of dicts with name and timezone
    pto_days: total PTO days across the team this sprint
    overlap_hours_per_day: actual synchronous working hours available
    """
    try:
        with open(CAPACITY_LOG) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    entry = {
        "sprint": sprint_name,
        "date": str(date.today()),
        "headcount": len(engineers),
        "pto_days": pto_days,
        "overlap_hours": overlap_hours_per_day,
        "effective_capacity": len(engineers) * SPRINT_DAYS - pto_days,
    }
    log.append(entry)
    with open(CAPACITY_LOG, "w") as f:
        json.dump(log, f, indent=2)
    return entry
```
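The overlap-hours input has to come from somewhere. A minimal way to derive it, assuming each engineer's working window is expressed in whole UTC hours and none crosses midnight (the cities and hours below are made up):

```python
def overlap_hours(windows):
    """windows: list of (start_utc_hour, end_utc_hour) tuples, one per engineer.
    Returns the number of hours when everyone is working simultaneously."""
    latest_start = max(start for start, _ in windows)
    earliest_end = min(end for _, end in windows)
    return max(0, earliest_end - latest_start)

team = [
    (13, 21),  # New York, 9am-5pm EDT
    (8, 16),   # London, 9am-5pm BST
    (7, 15),   # Berlin, 9am-5pm CEST
]
print(overlap_hours(team))  # hours of full-team overlap per day
```

Windows that wrap around midnight (e.g. an APAC engineer overlapping a US evening) would need interval splitting; this sketch deliberately ignores that case.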
## Posting Weekly Velocity to Slack
```bash
#!/bin/bash
# velocity-report.sh — runs after each sprint closes
VELOCITY=$(python3 /opt/scripts/github_velocity.py | tail -1)
SLACK_HOOK="$SLACK_WEBHOOK_URL"

curl -s -X POST "$SLACK_HOOK" \
  -H "Content-Type: application/json" \
  -d "{
    \"text\": \":bar_chart: *Sprint Velocity Report*\n\n${VELOCITY}\n\nFull report: <https://yourcompany.linear.app/cycles|Linear Cycles>\"
  }"
```
Schedule as a GitHub Actions workflow that runs on sprint close (tied to a milestone close event):
```yaml
on:
  milestone:
    types: [closed]

jobs:
  velocity-report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python3 scripts/github_velocity.py
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPO: ${{ github.repository }}
```
## Async Retrospective Metrics
Velocity alone doesn’t explain why a sprint went well or poorly. Pair it with structured async retrospective data to build a complete picture:
```markdown
## Sprint 42 Retro Data

**Velocity:** 45 pts (forecast was 48)

**What slowed us:**
- [ ] Auth service PR sat in review 4 days (tag: review_delay)
- [ ] Two unplanned production incidents (tag: incidents)

**What went well:**
- [ ] All planned features shipped
- [ ] Zero regression bugs from QA

**Action items:**
- [ ] Set a 48-hour review SLA for PRs (@alice owns the rotation)
- [ ] Add incident response runbook to reduce investigation time
```
Store retro notes in a structured format (YAML or JSON in your repo) and query them over time to find patterns:
```bash
# Count review_delay tags across stored retros
grep -r "review_delay" retros/ | wc -l
```
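The same idea scales past grep once retros live in structured files. A sketch, assuming each retro is a JSON file carrying a `tags` list (the `retros/` layout is an assumption):

```python
import json
from collections import Counter
from pathlib import Path

def count_tags(retros):
    """Tally slowdown tags across a list of retro records."""
    counts = Counter()
    for retro in retros:
        counts.update(retro.get("tags", []))
    return counts

# Load every retro file from a retros/ directory, oldest first
retros = [json.loads(p.read_text()) for p in sorted(Path("retros").glob("*.json"))]
print(count_tags(retros).most_common(3))
```

A tag that appears in three consecutive retros is a process problem, not bad luck.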
When velocity drops, checking 3 retros back usually surfaces the systemic cause. For remote teams, the cause is almost always one of three things: review bottlenecks, unclear acceptance criteria, or unplanned interrupt work eating into planned capacity.
## Related Reading
- Best Tools for Remote Team A/B Testing
- Best Tools for Remote Team Post-Mortems
- Best Tools for Remote Team Changelog Review
Built by theluckystrike — More at zovo.one