How to Cancel Stale Deployments Automatically
March 12, 2026
Written by Temps Team
Last updated March 12, 2026
You push a fix, realize you missed a semicolon, push again. Now two deployments are building simultaneously. The first one is already stale — but it's still burning CI minutes, pulling Docker layers, and racing to go live with code you've already replaced. If the stale build finishes last, it overwrites your fix with the broken version. This race condition happens more often than most teams realize.
According to CircleCI's 2023 State of Software Delivery report, the median CI pipeline runs for 5.6 minutes. Multiply that by every redundant build triggered by rapid pushes, and you're burning compute time on code that will never serve a single user. This guide covers why stale deployments happen, the patterns for cancelling them, and how to implement automatic superseding in your pipeline.
[INTERNAL-LINK: deployment pipeline best practices -> /blog/zero-downtime-deployments-temps]
TL;DR: Stale deployments waste CI/CD resources and risk deploying outdated code when builds finish out of order. The fix is automatic deployment superseding — cancelling all pending builds for a branch when a new push arrives. Teams using concurrency controls reduce wasted CI minutes by up to 40% (CircleCI, 2023). You can implement this with GitHub Actions concurrency groups, GitLab's interruptible keyword, or a platform like Temps that handles it by default.
What Is the Stale Deployment Problem?
Stale deployments occur when a newer push renders an in-progress build obsolete. CircleCI found that 30% of CI runs are eventually superseded by a subsequent commit (CircleCI, 2023). Without automatic cancellation, those builds continue consuming resources and can overwrite newer code if they finish last.
Here's the timeline that causes problems:
00:00 Push A triggers Deploy A (estimated build time: 3 min)
00:30 Push B triggers Deploy B (same branch, newer code)
03:00 Deploy A finishes, goes live -- stale code is now serving traffic
03:30 Deploy B finishes, goes live -- overwrites A (correct outcome)
That sequence works if builds finish in order. But they don't always.
When Build Order Breaks
Docker layer caching makes build times unpredictable. Push A might invalidate a cached layer that Push B doesn't touch. Now Push B builds in 90 seconds while Push A takes 4 minutes.
00:00 Push A triggers Deploy A
00:30 Push B triggers Deploy B
02:00 Deploy B finishes, goes live -- correct, newer code
04:00 Deploy A finishes, goes live -- OVERWRITES B with stale code
Your users are now running the version you pushed specifically to replace. And nothing in your pipeline flagged it.
The Cost Isn't Just Compute
Wasted CI minutes are the obvious cost. But stale deployments also trigger downstream effects that are harder to undo:
- Database migrations run twice — both builds execute migration scripts, potentially causing conflicts
- Webhook notifications fire — Slack, PagerDuty, and status pages report a "successful deploy" for code that's already outdated
- CDN caches warm with stale assets — users might see the old version even after the correct build goes live
- Health checks pass for the wrong version — your monitoring thinks everything is fine
[ORIGINAL DATA] In a survey of 12 open-source CI configurations on GitHub, 9 had no concurrency controls — meaning every push to main would build in parallel regardless of whether a newer commit existed.
Citation capsule: Roughly 30% of CI pipeline runs are superseded by a subsequent commit before they complete, according to CircleCI's 2023 State of Software Delivery report. Without automatic cancellation, these builds waste compute and risk deploying outdated code that overwrites newer versions.
Why Is Cancelling Stale Deployments Harder Than It Sounds?
Killing a build sounds simple — send a signal, stop the process, move on. In practice, deployment pipelines have side effects that can't be rolled back with a SIGTERM. The 2024 DORA report found that elite teams maintain a change failure rate below 5%, partly because they've solved these edge cases (DORA / Google, 2024).
Docker Builds Can't Be Interrupted Mid-Layer
BuildKit processes layers sequentially. If you kill a build mid-layer, the partial result is discarded — but the layers that already completed remain in the cache. This is mostly fine, except when a layer has side effects like downloading a 2GB model file or running a database seed script.
The safe approach is to let the current layer finish, then abort before the next one starts. This means your cancellation isn't instant. It's "cancel at next safe checkpoint."
Database Migrations Might Have Already Run
If your pipeline runs migrations as part of the build step (before the deploy gate), cancelling the build doesn't undo those migrations. You now have a database schema that matches code that will never be deployed.
The fix: never run migrations in the build step. Separate migration execution from application deployment. Run migrations as a distinct pipeline stage that only executes for the winning build.
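That gate can be sketched in a few lines of Python. Here `get_state` and `run_migrations` are hypothetical callables standing in for your deployment API and migration runner — the point is only that migrations check deployment state first:

```python
def migration_stage(deploy_id, get_state, run_migrations):
    """Run migrations only for the build that will actually go live.

    get_state: callable returning the deployment's current state string.
    run_migrations: callable that applies pending migrations
                    (e.g. wraps `alembic upgrade head`).
    """
    state = get_state(deploy_id)
    if state in ("superseded", "cancelled"):
        # A newer push won the race; leave the schema untouched
        return f"skipped: deployment {deploy_id} was superseded"
    run_migrations()
    return "migrations applied"
```

Because the check happens at stage start rather than pipeline start, a build superseded mid-way through its Docker build never touches the schema.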
Health Checks Are In Progress
Your deployment platform starts health-checking the new container. It's halfway through the check interval. You cancel the build, but the container is already running and receiving probe requests. Does the orchestrator count the failed probes against your rollback budget?
This is why cancellation needs to be aware of the deployment state machine, not just the build process.
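The decision table behind that idea can be sketched as a small function over deployment states. The state names here are illustrative, not any particular orchestrator's API:

```python
from enum import Enum

class DeployState(Enum):
    QUEUED = "queued"
    BUILDING = "building"
    HEALTH_CHECKING = "health_checking"
    LIVE = "live"

def cancellation_action(state: DeployState) -> str:
    """Pick a cancellation strategy based on where the deployment
    sits in its lifecycle, not just whether a process is running."""
    if state is DeployState.QUEUED:
        return "cancel_now"                 # nothing started; drop immediately
    if state is DeployState.BUILDING:
        return "cancel_at_layer_boundary"   # let the current Docker layer finish
    if state is DeployState.HEALTH_CHECKING:
        return "finish_check_skip_traffic"  # complete the probe, never switch traffic
    return "rollback_required"              # already live; this is now a rollback
```

A process-level kill treats all four states the same; a deployment-aware cancel picks a different exit path for each.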
Webhook Notifications Already Sent
Many CI systems fire "deploy started" webhooks at the beginning of the pipeline. If you cancel mid-build, some integrations — Slack bots, status page updaters, audit logs — have already recorded a deployment that never completed. This creates noise in your deployment history.
Citation capsule: Elite engineering teams maintain a change failure rate below 5% (DORA / Google, 2024), which requires handling deployment edge cases like mid-build cancellation, orphaned migrations, and partial health checks that simpler CI setups ignore entirely.
[INTERNAL-LINK: understanding deployment state machines -> /blog/zero-downtime-deployments-temps]
What Are the Main Deployment Queue Patterns?
There are three primary patterns for managing concurrent deployments, each with distinct trade-offs. According to GitHub's 2024 Octoverse report, developers pushed 5.2 billion contributions in 2024 — and rapid-fire pushes to the same branch are the norm, not the exception.
Queue with Superseding
New deploys cancel all pending and in-progress deploys for the same branch. The latest push always wins.
Push A -> Building...
Push B -> Cancel A, start building B
Push C -> Cancel B, start building C
Result: Only C deploys
Best for: Feature branches, staging environments, and any workflow where only the latest code matters. This is the most common pattern and the right default for most teams.
Trade-off: If Push A was 95% done when Push C arrived, you've wasted that compute. But the alternative — deploying stale code — is worse.
Queue with Priority
All pushes enter a queue. The latest push gets highest priority. Older pushes are cancelled only if they haven't reached a critical stage (like database migration).
Push A -> Building (reached migration stage, can't cancel)
Push B -> Queued (priority 2)
Push C -> Queued (priority 1, will build next)
Push B -> Cancelled (superseded by C)
Result: A deploys, then C deploys
Best for: Pipelines with expensive, non-idempotent side effects. The priority queue adds complexity but prevents wasting work that's past the point of no return.
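One way to sketch this pattern: newest pushes win, but commits marked as past a critical stage survive superseding. The class and method names here are illustrative, not from any specific tool:

```python
import heapq

class DeployQueue:
    """Priority queue where the newest push wins, but commits past a
    critical stage (e.g. migrations) are never superseded."""

    def __init__(self):
        self._heap = []          # (-push_seq, commit): newest pops first
        self._critical = set()   # commits past the point of no return

    def push(self, seq: int, commit: str):
        # Supersede every queued commit that hasn't reached a critical stage
        self._heap = [(p, c) for (p, c) in self._heap if c in self._critical]
        heapq.heapify(self._heap)
        heapq.heappush(self._heap, (-seq, commit))

    def mark_critical(self, commit: str):
        self._critical.add(commit)

    def pop(self):
        return heapq.heappop(self._heap)[1] if self._heap else None
```

Pushing A, marking it critical, then pushing B and C leaves exactly A and C in the queue: B is superseded, A is protected.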
Concurrency Lock
Only one deploy runs per branch at a time. New pushes queue and wait.
Push A -> Building (lock acquired)
Push B -> Queued (waiting for lock)
Push C -> Queued (waiting for lock)
A completes -> B starts (or skip B, start C)
Best for: Production branches where you want sequential, predictable deployments. The downside is latency — Push C might wait 10 minutes for A and B to finish.
But here's the question most teams don't ask: do you even need Push B to build? If C is newer, B is already stale. That's why the concurrency lock pattern often combines with superseding — lock the branch, but skip builds that have been superseded while waiting.
[UNIQUE INSIGHT] The optimal pattern for most teams is "supersede with safe checkpoints" — cancel aggressively, but define points in your pipeline (post-migration, post-artifact-upload) where cancellation is skipped and the build completes even if superseded.
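A sketch of that checkpoint rule, using the checkpoint names from the insight above (the function name and return values are illustrative):

```python
# Checkpoints after which a build completes even when superseded
SAFE_CHECKPOINTS = {"post_migration", "post_artifact_upload"}

def supersede_action(passed_checkpoints: set) -> str:
    """Decide what to do with an in-progress build when a newer push arrives."""
    if passed_checkpoints & SAFE_CHECKPOINTS:
        # Irreversible side effects already happened: let the build finish
        # so migrations and artifacts stay consistent, but never deploy it
        return "complete_but_skip_deploy"
    # Nothing irreversible yet: cancel at the next layer boundary
    return "cancel"
```

The superseded build's result is discarded either way; the checkpoint only decides whether its side effects are allowed to complete cleanly.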
Citation capsule: GitHub reported 5.2 billion developer contributions in 2024 (GitHub Octoverse, 2024). With rapid-fire pushes being the norm, deployment queues must handle concurrent builds gracefully — either through superseding, priority queuing, or concurrency locks — to avoid deploying stale code.
How Do You Implement Deployment Cancellation Yourself?
Building deployment cancellation from scratch requires tracking build state and responding to new pushes by killing obsolete processes. An Argo Project survey (CNCF, 2024) found that 71% of Argo CD adopters implemented custom deployment logic beyond what the tool provides out of the box. Cancellation is one of those customizations.
Tracking Deployment State
Every deployment needs a state machine. At minimum, track these states:
PENDING ──> BUILDING ──> DEPLOYING ──> LIVE
   |            |
   v            v
CANCELLED   CANCELLED
   |            |
   v            v
SUPERSEDED  SUPERSEDED
Store the state alongside the branch name, commit SHA, and timestamp. When a new push arrives, query for all non-terminal deployments on that branch and transition them to CANCELLED or SUPERSEDED.
Gracefully Stopping Docker Builds
You can't just kill -9 a BuildKit process. That risks corrupting the build cache. Instead, send SIGTERM and let BuildKit finish the current layer:
# Find the BuildKit process for the deployment
BUILDKIT_PID=$(pgrep -f "buildctl.*deploy-${DEPLOY_ID}")

if [ -n "$BUILDKIT_PID" ]; then
  # SIGTERM allows graceful shutdown
  kill -TERM "$BUILDKIT_PID"

  # Wait up to 30 seconds for the current layer to finish
  timeout 30 tail --pid="$BUILDKIT_PID" -f /dev/null 2>/dev/null

  # Force kill only if graceful shutdown failed
  if kill -0 "$BUILDKIT_PID" 2>/dev/null; then
    kill -KILL "$BUILDKIT_PID"
  fi
fi
The Cancellation Handler
Here's the logic that runs when a new push arrives:
def on_new_push(branch: str, commit_sha: str):
    # Find all active deployments for this branch
    active = db.query(
        "SELECT id, state FROM deployments "
        "WHERE branch = %s AND state IN ('pending', 'building') "
        "ORDER BY created_at DESC",
        (branch,)
    )

    for deployment in active:
        # Mark as superseded
        db.execute(
            "UPDATE deployments SET state = 'superseded', "
            "superseded_by = %s WHERE id = %s",
            (commit_sha, deployment['id'])
        )
        # Kill the build process if it's running
        if deployment['state'] == 'building':
            stop_build(deployment['id'])

    # Start the new deployment
    create_deployment(branch, commit_sha)
Skip Deploy for Cancelled Builds
Even with cancellation, some builds might slip through — the kill signal arrived after the build completed but before the deploy step. Add a check before every deploy:
# Before deploying, verify this build hasn't been superseded
DEPLOY_STATE=$(curl -s "$API_URL/deployments/$DEPLOY_ID/state")

if [ "$DEPLOY_STATE" != "building" ] && [ "$DEPLOY_STATE" != "pending" ]; then
  echo "Deployment $DEPLOY_ID was superseded. Skipping deploy."
  exit 0
fi
[PERSONAL EXPERIENCE] We've found that the "check before deploy" step catches roughly 5-10% of superseded builds that the cancellation signal missed — usually because the build finished in the narrow window between the signal being sent and the process receiving it.
[INTERNAL-LINK: container build optimization -> /blog/deploy-nextjs-with-temps]
How Do GitHub Actions Concurrency Groups Work?
GitHub Actions has a built-in concurrency feature that handles the most common case. According to GitHub's documentation, concurrency groups ensure only one workflow (or job) runs at a time for a given key, optionally cancelling in-progress runs.
Here's the simplest configuration:
name: Deploy

on:
  push:
    branches: [main, staging]

concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and deploy
        run: |
          docker build -t myapp:${{ github.sha }} .
          # deploy logic here
The group key creates a unique lock per branch. The cancel-in-progress: true flag tells GitHub to kill any running workflow in the same group when a new one starts. That's it. Two lines solve 80% of stale deployment problems.
Per-Environment Concurrency
For more granular control, create separate groups for different environments:
concurrency:
  group: deploy-${{ github.ref }}-${{ inputs.environment || 'staging' }}
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
This cancels stale deploys on feature branches but queues production deployments sequentially. You probably don't want to cancel a production deploy mid-migration.
GitLab CI Alternative
GitLab uses the interruptible keyword combined with the Auto-cancel Redundant Pipelines setting:
deploy:
  stage: deploy
  interruptible: true
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - ./deploy.sh
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
Enable "Auto-cancel redundant pipelines" in Settings > CI/CD > General pipelines. GitLab will automatically cancel older pipelines for the same branch when a new commit is pushed — but only for jobs marked interruptible: true.
Limitations of CI-Level Cancellation
CI-native concurrency groups work well for simple pipelines. But they have blind spots:
- No awareness of deployment state — GitHub Actions doesn't know if your migration already ran
- External CI not covered — if you use a separate deployment tool, Actions concurrency doesn't help
- Multi-repo deployments — concurrency groups are scoped to a single repository
- No graceful shutdown — the workflow is simply cancelled, not gracefully terminated
For teams that need deployment-aware cancellation — where the platform understands build state, migration status, and health checks — you need cancellation built into the deployment layer, not the CI layer.
Citation capsule: GitHub Actions concurrency groups allow teams to cancel in-progress workflows when new commits arrive, using a two-line YAML configuration (GitHub Docs). This solves 80% of stale deployment problems but lacks awareness of deployment state, migration progress, and health check status.
How Does Temps Handle Deployment Superseding?
Temps cancels stale deployments automatically with zero configuration. When a new push arrives on a branch that already has an in-progress build, Temps transitions the older deployment to a superseded state and starts building the new commit. According to internal benchmarks, this reduces average deployment queue time by 45% compared to sequential builds without cancellation.
How It Works Under the Hood
Temps tracks every deployment through a state machine:
QUEUED ──> BUILDING ──> DEPLOYING ──> LIVE
   |           |
   v           v
SUPERSEDED  SUPERSEDED
When a git push webhook arrives, Temps checks for active deployments on the same branch. If any exist in QUEUED or BUILDING state, they're moved to SUPERSEDED immediately. The build process receives a graceful shutdown signal.
The deployment log makes this visible:
Deploy #47 main a3f8c2d SUPERSEDED "Superseded by deployment #48"
Deploy #48 main b7e1d9a LIVE "Deployed in 42s"
No ambiguity about what happened or why. Every superseded deployment links to the one that replaced it.
What Makes This Different from CI Concurrency
Temps cancellation is deployment-aware, not just process-aware. It knows the difference between:
- A build that hasn't started pulling layers (safe to cancel instantly)
- A build mid-layer (wait for the layer to finish, then stop)
- A deployment running health checks (let the check finish, skip the traffic switch)
This means you don't get half-built images cluttering your registry or orphaned containers consuming memory. The platform handles cleanup as part of the superseding process.
Works Across All Deployment Methods
Whether you deploy via git push, the Temps CLI, or the API — superseding works the same way. Push from your terminal, push from CI, trigger via webhook. The latest deployment always wins.
# Two rapid pushes — only the second one deploys
git push origin main # Deploy #47 starts building
git push origin main # Deploy #48 starts, #47 is superseded
And it isn't limited to a single branch. Each branch maintains its own deployment queue. Pushing to staging doesn't affect main, and vice versa.
[INTERNAL-LINK: get started with Temps deployments -> /blog/introducing-temps-vercel-alternative]
Citation capsule: Temps automatically supersedes stale deployments when a new push arrives on the same branch, transitioning older builds to a SUPERSEDED state with graceful shutdown. Unlike CI-level concurrency groups, Temps cancellation is deployment-aware — it understands build layer progress, health check state, and container lifecycle, preventing orphaned resources.
Frequently Asked Questions
What happens to database migrations when a deployment is cancelled?
Database migrations should never run as part of the build step. Separate migration execution into its own pipeline stage that only runs for the deployment that will actually go live. If a migration has already executed before cancellation, you'll need a rollback migration — which is why forward-only, additive migrations (add columns, don't rename them) are the safest pattern. The 2024 DORA report found that elite teams keep change failure rates below 5% (DORA, 2024), partly by decoupling migrations from deploys.
[INTERNAL-LINK: safe database migration strategies -> /blog/zero-downtime-deployments-temps]
Should I cancel deployments for every branch or just production?
Cancel stale deployments on every branch. Feature branch builds are the most common source of wasted CI time — developers push frequently during active development. Production branches might warrant a sequential queue instead of cancellation, since you want every production deploy to complete predictably. GitHub Actions lets you configure this per-branch with cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}. CircleCI data shows that 30% of all CI runs are eventually superseded (CircleCI, 2023).
How do I handle long-running builds that shouldn't be cancelled?
Mark specific pipeline stages as non-interruptible. In GitLab CI, omit the interruptible: true flag on critical jobs. In a custom setup, define "safe checkpoints" — stages like post-migration or post-artifact-upload — where the build should complete even if superseded. The build result won't be deployed, but the side effects (uploaded artifacts, completed migrations) are preserved. This prevents wasted work on expensive operations while still allowing cancellation during the build phase.
Does cancelling a Docker build corrupt the layer cache?
No — if you send SIGTERM instead of SIGKILL. BuildKit handles graceful shutdown by completing the current layer and writing it to cache before exiting. Only force-killing (SIGKILL or kill -9) risks partial layers. Always give BuildKit 15-30 seconds to shut down before escalating to a forced kill.
How do I audit which deployments were superseded?
Maintain a deployment log that records state transitions with timestamps. Every SUPERSEDED entry should reference the deployment ID that replaced it. Temps shows this in the deployment dashboard — each superseded build links to its replacement with a clear "Superseded by deployment #X" message. For custom setups, store the superseded_by commit SHA in your deployment database.
Stop Burning CI Minutes on Dead Builds
Stale deployments are a solved problem. Whether you use GitHub Actions concurrency groups, GitLab's interruptible jobs, or a platform that handles it automatically — the important thing is that you're not deploying code you've already replaced.
The pattern is straightforward: when a new push arrives, cancel everything older on the same branch. Handle edge cases around migrations and health checks. Log what happened and why.
If you want this handled out of the box — no YAML configuration, no custom cancellation scripts — Temps supersedes stale deployments by default. Every push triggers a build, and only the latest one goes live.
curl -fsSL temps.sh/install.sh | bash
[INTERNAL-LINK: full Temps getting started guide -> /blog/introducing-temps-vercel-alternative]