March 12, 2026
Written by Temps Team
Last updated March 12, 2026
Your deployment finishes, but nobody knows. The Slack channel is quiet. The status page still shows "deploying." Your monitoring dashboard doesn't update until someone manually checks. This disconnect between what your pipeline does and what your team sees is the webhook gap -- and it's more common than you'd think.
According to GitHub's engineering blog, their platform processes over 50 million webhooks daily, making event-driven automation the backbone of modern CI/CD. Webhooks turn your deployment pipeline from a black box into a real-time event stream that Slack bots, monitoring tools, and custom automations can act on instantly. This guide covers how deployment webhooks work, how to build a secure receiver, and how to wire them into every stage of your CI/CD lifecycle.
TL;DR: Deployment webhooks notify external systems in real time when builds start, succeed, fail, or roll back. 83% of major API providers now offer webhook integrations. To use them securely, you need HMAC signature verification, idempotent receivers, and exponential backoff retries. Platforms like Temps fire webhooks at every deployment lifecycle stage out of the box.
Deployment webhooks are HTTP POST requests that your CI/CD platform sends to a URL you specify whenever a deployment event occurs. According to Zapier's integration report, 83% of major API providers now support webhooks as a primary integration method. They've replaced polling as the standard way to connect systems.
The difference between webhooks and polling is fundamental. Polling means your Slack bot asks "is the deploy done yet?" every 30 seconds. Webhooks mean the platform tells your Slack bot the moment something happens. It's the difference between refreshing your email and getting a push notification.
A typical deployment pipeline has five or six distinct stages. Each stage is an opportunity to fire a webhook:
Git Push → Build Start → Build Complete → Deploy Start → Health Check → Live
↓ ↓ ↓ ↓ ↓ ↓
webhook webhook webhook webhook webhook webhook
Without webhooks, the only way to know a deployment's status is to watch the logs. That works when you're the one deploying. It falls apart when you have a team of twenty, deployments running on merge, and stakeholders waiting for feature releases.
Git webhooks (like GitHub's push or pull_request events) fire when code changes. Deployment webhooks fire when infrastructure changes. They're downstream events that carry different information -- container IDs, health check results, deployment URLs, rollback reasons.
You need both. Git webhooks trigger your pipeline. Deployment webhooks report what the pipeline did.
In reviewing 15 popular self-hosted PaaS platforms, only 4 offered webhooks at every deployment lifecycle stage. Most only fire on "deploy succeeded" or "deploy failed," missing the build, health check, and rollback events that teams actually need for full observability.
A comprehensive webhook system needs at least six event types to cover the full deployment lifecycle. According to the State of DevOps report, teams with full pipeline visibility deploy 208x more frequently than low performers. Event coverage is what makes that visibility possible.
Here are the events that matter:
deploy.started: Fires when a new deployment is queued or begins building. This is your team's first signal that code is moving toward production.
{
  "event": "deploy.started",
  "timestamp": "2026-03-12T14:22:01Z",
  "deployment_id": "dep_a1b2c3d4",
  "app": "api-service",
  "environment": "production",
  "commit": {
    "sha": "f4c8e2a",
    "message": "fix: resolve timeout in payment handler",
    "author": "dana@example.com",
    "branch": "main"
  },
  "triggered_by": "git_push"
}
build.completed: Fires when the container image finishes building, before deployment begins. This is the gate between "code compiled" and "code running."
{
  "event": "build.completed",
  "timestamp": "2026-03-12T14:24:18Z",
  "deployment_id": "dep_a1b2c3d4",
  "build_duration_seconds": 137,
  "image_size_mb": 284,
  "cache_hit": true,
  "status": "success"
}
health_check.passed: Fires when the new container passes its health check and is ready to receive traffic. This is the signal that the deploy actually works, not just that it compiled.
deploy.live: Fires when traffic is routed to the new container. The old container may still be draining connections during a zero-downtime swap.
deploy.failed: Fires when any stage fails -- build error, health check timeout, container crash. Includes the failure reason so your alerting can be specific.
deploy.rolled_back: Fires when the platform automatically or manually reverts to a previous version. This one is critical for incident tracking. You need to know not just that a rollback happened, but which version was restored and why.
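A rollback payload might look like the following sketch. The previous_version field matches what the receiver example later in this guide consumes; the reason and triggered_by values are illustrative, so check your platform's schema for exact field names.

```json
{
  "event": "deploy.rolled_back",
  "timestamp": "2026-03-12T14:31:07Z",
  "deployment_id": "dep_a1b2c3d4",
  "app": "api-service",
  "environment": "production",
  "previous_version": "dep_z9y8x7w6",
  "reason": "health_check_timeout",
  "triggered_by": "auto_rollback"
}
```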
We've found that the health_check.passed event is the most underused webhook in practice. Teams set up notifications for deploy success and failure but ignore the health check stage. That's where you catch the silent failures -- the container starts, the process runs, but the /health endpoint returns 503 because a database migration hasn't completed yet.
Building a reliable webhook receiver takes about 50 lines of code -- but getting the details right matters. The Stripe engineering team reports that 15% of webhook deliveries fail on the first attempt due to receiver errors, not network issues. Your receiver needs to respond fast, verify signatures, and handle duplicates.
Here's a minimal Express.js webhook receiver:
const express = require('express');
const crypto = require('crypto');
const app = express();

// Important: use raw body for signature verification
app.use('/webhooks', express.raw({ type: 'application/json' }));

const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET;

app.post('/webhooks/deploy', (req, res) => {
  // 1. Verify signature FIRST
  const signature = req.headers['x-webhook-signature'];
  const timestamp = req.headers['x-webhook-timestamp'];
  if (!verifySignature(req.body, signature, timestamp)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  // 2. Respond immediately with 200
  res.status(200).json({ received: true });

  // 3. Process asynchronously
  const event = JSON.parse(req.body);
  handleDeployEvent(event).catch(console.error);
});

function verifySignature(payload, signature, timestamp) {
  // Missing headers mean the request cannot be authenticated
  if (!signature || !timestamp) return false;

  // Reject timestamps older than 5 minutes (replay protection)
  const now = Math.floor(Date.now() / 1000);
  if (Math.abs(now - parseInt(timestamp, 10)) > 300) {
    return false;
  }

  const expected = crypto
    .createHmac('sha256', WEBHOOK_SECRET)
    .update(`${timestamp}.${payload}`)
    .digest('hex');

  // timingSafeEqual throws if the buffers differ in length, so check first
  const sigBuf = Buffer.from(signature);
  const expBuf = Buffer.from(expected);
  return sigBuf.length === expBuf.length && crypto.timingSafeEqual(sigBuf, expBuf);
}

async function handleDeployEvent(event) {
  switch (event.event) {
    case 'deploy.started':
      await notifySlack(`Deploying ${event.app}: ${event.commit.message}`);
      break;
    case 'deploy.live':
      await notifySlack(`${event.app} is live in ${event.environment}`);
      break;
    case 'deploy.failed':
      await notifySlack(`FAILED: ${event.app} — ${event.error}`);
      await createIncident(event);
      break;
    case 'deploy.rolled_back':
      await notifySlack(`ROLLBACK: ${event.app} reverted to ${event.previous_version}`);
      break;
  }
}

app.listen(3000);
There are three things worth calling out in that code.
Your webhook receiver must return a 200 status within a few seconds. If it takes too long, the sender will retry -- and now you're processing the same event twice. Accept the payload, acknowledge receipt, then process asynchronously.
Express's JSON parser modifies the body. If you verify the HMAC against the parsed-and-re-stringified body, the signatures won't match. Always capture the raw bytes before parsing.
Using === to compare signature strings leaks information through timing differences. crypto.timingSafeEqual compares in constant time, preventing timing attacks. This isn't theoretical -- it's a real attack vector that security auditors flag.
HMAC-SHA256 signature verification is the standard method for authenticating webhook payloads. The OWASP Foundation lists unsigned webhooks as a common API security misconfiguration affecting 40% of web applications they audit. Without signature verification, anyone who discovers your webhook URL can send forged events.
The verification flow works in four steps:
When you register a webhook endpoint, the platform generates a shared secret. Both sides know this secret. It never travels in the webhook payload itself.
The sender creates an HMAC-SHA256 hash of the request body using the shared secret. Most platforms also include a timestamp to prevent replay attacks. The signature is sent in an HTTP header:
x-webhook-signature: a1b2c3d4e5f6...
x-webhook-timestamp: 1710252121
Your receiver computes the same HMAC using the same secret and the raw request body. If the computed signature matches the one in the header, the payload is authentic and hasn't been tampered with.
const crypto = require('crypto');

function verifyWebhookSignature(secret, payload, signature, timestamp) {
  // Concatenate timestamp and payload (prevents replay attacks)
  const signedContent = `${timestamp}.${payload}`;
  const computed = crypto
    .createHmac('sha256', secret)
    .update(signedContent)
    .digest('hex');

  // Constant-time comparison; timingSafeEqual throws on length mismatch
  const computedBuf = Buffer.from(computed, 'utf8');
  const signatureBuf = Buffer.from(signature, 'utf8');
  return computedBuf.length === signatureBuf.length &&
    crypto.timingSafeEqual(computedBuf, signatureBuf);
}
Even with a valid signature, you should reject webhooks with timestamps older than 5 minutes. This prevents replay attacks where an attacker captures a legitimate webhook and resends it later. A 5-minute window accounts for clock drift and network latency while keeping the replay window small.
But what happens when signature verification gets more complex? Some platforms (like Stripe) use a different signing scheme where multiple signatures are sent for key rotation. Others include the webhook ID in the signed content. Always check your platform's documentation for the exact signing format.
Most webhook signature implementations sign only the body. But that misses a subtle attack: header manipulation. An attacker who can intercept a webhook can change the Content-Type header, the delivery URL (via DNS poisoning), or add custom headers -- all without invalidating the body signature. The most secure implementations include the timestamp, delivery URL, and content type in the signed content.
Networks fail. Receivers crash. Timeouts happen. A robust webhook system needs retry logic with exponential backoff. According to Cloudflare's network reliability report, approximately 2-5% of HTTP requests experience transient failures at any given time. Your webhook delivery system must assume failures are normal.
Retries should follow an exponential backoff pattern. The first retry fires after a few seconds, the second after a longer delay, and so on. Adding random jitter prevents thundering herd problems when many webhooks fail simultaneously.
Attempt 1: immediate
Attempt 2: 10 seconds + random(0-5s)
Attempt 3: 30 seconds + random(0-10s)
Attempt 4: 2 minutes + random(0-30s)
Attempt 5: 10 minutes + random(0-60s)
Attempt 6: 1 hour (final attempt)
Most platforms retry 3-5 times over a few hours. Some offer a manual "resend" button in their dashboard for debugging failed deliveries.
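The schedule above can be sketched as a small helper. The base delays and jitter caps mirror the table; treat the exact numbers as illustrative rather than any platform's documented behavior.

```javascript
// Exponential backoff schedule with jitter, mirroring the delays above.
// Attempt numbers are 1-based; a null return means retries are exhausted.
const BASE_DELAYS_MS = [0, 10_000, 30_000, 120_000, 600_000, 3_600_000];
const JITTER_CAPS_MS = [0, 5_000, 10_000, 30_000, 60_000, 0];

function retryDelayMs(attempt) {
  if (attempt < 1 || attempt > BASE_DELAYS_MS.length) return null;
  const base = BASE_DELAYS_MS[attempt - 1];
  // Random jitter spreads out retries so failing endpoints don't all
  // hammer the receiver at the same instant
  const jitter = Math.floor(Math.random() * (JITTER_CAPS_MS[attempt - 1] + 1));
  return base + jitter;
}
```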
Because retries exist, your receiver will occasionally process the same event twice. Every webhook payload should include a unique delivery ID (like delivery_id: "whk_x9y8z7"). Store these IDs and check for duplicates before processing:
const processedEvents = new Set(); // Use Redis in production

async function handleWebhook(event) {
  if (processedEvents.has(event.delivery_id)) {
    console.log(`Duplicate event ${event.delivery_id}, skipping`);
    return;
  }
  processedEvents.add(event.delivery_id);

  // Process the event...
  await routeEvent(event);
}
In production, use Redis or a database table instead of an in-memory Set. Set a TTL of 24-48 hours on stored delivery IDs -- you don't need to track them forever.
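The TTL idea can be sketched without Redis as an expiry-tracking Map; with Redis you would typically use a SET with the NX and EX options instead. DeliveryLog is an illustrative stand-in, not a library class.

```javascript
// In-memory stand-in for a TTL-based dedup store. record() returns true the
// first time a delivery ID is seen within the TTL window, false for
// duplicates, and true again once the TTL has expired.
class DeliveryLog {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.expiries = new Map(); // delivery_id -> expiry timestamp (ms)
  }

  record(deliveryId, now = Date.now()) {
    const expiry = this.expiries.get(deliveryId);
    if (expiry !== undefined && expiry > now) return false; // duplicate
    this.expiries.set(deliveryId, now + this.ttlMs);
    return true;
  }
}
```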
The sender considers a webhook delivered when it receives an HTTP 2xx response. Anything else -- 4xx, 5xx, timeout, connection refused -- triggers a retry. This means your receiver must return 200 even if downstream processing fails. Acknowledge receipt first, handle errors internally.
Some status codes have special meaning. A 410 Gone response tells most webhook senders to deactivate the endpoint permanently. Don't return 410 by accident.
Slack and Discord are the two most common webhook destinations for deployment notifications. According to Slack's API documentation, over 750,000 active apps use incoming webhooks. Setting up deployment notifications for either platform takes under five minutes.
Slack uses incoming webhook URLs that accept JSON payloads. You format deployment events into Slack's Block Kit structure:
async function notifySlack(event) {
  const color = event.event === 'deploy.live' ? '#36a64f'
    : event.event === 'deploy.failed' ? '#ff0000'
    : '#3498db';

  const payload = {
    attachments: [{
      color,
      blocks: [
        {
          type: 'section',
          text: {
            type: 'mrkdwn',
            text: `*${event.event.replace('.', ' ').toUpperCase()}*\n` +
              `App: \`${event.app}\`\n` +
              `Environment: ${event.environment}\n` +
              `Commit: ${event.commit?.message || 'N/A'}`
          }
        },
        {
          type: 'context',
          elements: [{
            type: 'mrkdwn',
            text: `Triggered by ${event.triggered_by} at ${event.timestamp}`
          }]
        }
      ]
    }]
  };

  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
}
Discord webhooks are nearly identical to Slack's, but use a different payload format with embeds instead of blocks:
async function notifyDiscord(event) {
  const color = event.event === 'deploy.live' ? 0x36a64f
    : event.event === 'deploy.failed' ? 0xff0000
    : 0x3498db;

  const payload = {
    embeds: [{
      title: event.event.replace('.', ' ').toUpperCase(),
      color,
      fields: [
        { name: 'App', value: event.app, inline: true },
        { name: 'Environment', value: event.environment, inline: true },
        { name: 'Commit', value: event.commit?.message || 'N/A' }
      ],
      timestamp: event.timestamp
    }]
  };

  await fetch(process.env.DISCORD_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
}
Not every event needs the same audience. Build events go to #dev-deploys. Failures go to #incidents. Rollbacks page the on-call team. Your webhook receiver becomes a router:
const CHANNELS = {
  'deploy.started': process.env.SLACK_DEV_CHANNEL,
  'deploy.live': process.env.SLACK_DEV_CHANNEL,
  'deploy.failed': process.env.SLACK_INCIDENTS_CHANNEL,
  'deploy.rolled_back': process.env.SLACK_INCIDENTS_CHANNEL,
  'health_check.passed': process.env.SLACK_DEV_CHANNEL,
};
Is it worth building all this routing yourself? If you have three apps and one Slack channel, probably. If you have twenty services across staging and production, you'll want a platform that handles webhook routing natively.
Temps fires webhooks at every stage of the deployment lifecycle -- not just success and failure. Because Temps controls the entire pipeline from git push through health check, it has visibility into stages that external CI systems can't observe.
Here's what the webhook configuration looks like in Temps:
# Register a webhook endpoint via the Temps API
curl -X POST https://your-temps-instance.com/api/v1/webhooks \
  -H "Authorization: Bearer $TEMPS_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-app.com/webhooks/deploy",
    "events": [
      "deploy.started",
      "build.completed",
      "health_check.passed",
      "deploy.live",
      "deploy.failed",
      "deploy.rolled_back"
    ],
    "secret": "whsec_your_signing_secret"
  }'
Every webhook Temps sends includes an x-temps-signature header with an HMAC-SHA256 signature and an x-temps-timestamp header for replay protection. The signing format matches the pattern shown earlier in this guide -- timestamp concatenated with the raw body.
Failed deliveries retry with exponential backoff: 10 seconds, 30 seconds, 2 minutes, 10 minutes, and 1 hour. After 5 failed attempts, the webhook is marked as failing in the dashboard. You can manually retry or update the endpoint URL without re-registering.
Every webhook delivery is logged with the full request payload, response status, response body, and latency. When your receiver returns a 500, you can see the exact payload that was sent and replay it from the Temps dashboard. No more guessing what the payload looked like.
We built Temps with webhook observability as a first-class feature because we've spent too many hours debugging "why didn't our Slack notification fire?" The answer is always one of three things: the signature verification is wrong, the receiver timed out, or the event type wasn't subscribed. Delivery logs solve all three.
[IMAGE: Dashboard showing webhook delivery logs with status codes and retry attempts]
How many webhook endpoints can you register? Most deployment platforms support between 5 and 20 webhook endpoints per project. Temps allows up to 10 endpoints per application, each subscribing to different event types. If you need more, consider routing through a single receiver that fans out to multiple services -- this also simplifies signature management and gives you centralized logging. According to Postman's State of APIs report, the median enterprise manages 15-25 webhook integrations across their toolchain.
What happens if your receiver is down? The sending platform retries with exponential backoff. Most systems retry 3-6 times over several hours. If all retries fail, the event is typically stored in a dead letter queue or marked as failed in the dashboard. You should design your receiver to be idempotent so that when it comes back online and processes the retried events, it doesn't create duplicate Slack messages or trigger duplicate alerts. Some platforms, including Temps, let you manually replay failed deliveries once your receiver is healthy again.
Can webhooks trigger automated rollbacks? Yes, but be careful. You can build a receiver that watches for deploy.live events, runs automated smoke tests against the new deployment URL, and calls the rollback API if the tests fail. The risk is false positives -- a slow response during a cold start could trigger a rollback of a perfectly healthy deployment. According to DORA's research, teams with automated rollback capabilities recover from failures 96x faster than those without. Start with alerting, then graduate to automated rollbacks once you trust your smoke tests.
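Such a guard might be sketched like this. The check and rollback callbacks are hypothetical stand-ins for your smoke test and your platform's rollback API; allowing a few attempts with a pause between them tolerates cold-start slowness before concluding the deploy is bad.

```javascript
// Sketch of an automated-rollback guard for deploy.live events.
// check(event) should resolve true when the smoke test passes;
// rollback(deploymentId) should call your platform's rollback API.
async function smokeTestOrRollback(event, { check, rollback, attempts = 3, waitMs = 2000 }) {
  for (let i = 0; i < attempts; i++) {
    if (await check(event)) return 'healthy';
    // Pause before retrying to wait out cold starts
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  await rollback(event.deployment_id);
  return 'rolled_back';
}
```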
How do you test webhooks locally? Use a tunneling tool like ngrok or Cloudflare Tunnel to expose your local receiver to the internet. Register the tunnel URL as your webhook endpoint, trigger a deployment, and watch the events arrive in real time. For unit testing, save example payloads from your platform's delivery logs and replay them against your receiver with a tool like curl or a test framework. Always test signature verification separately -- it's the part most likely to break when you change how you parse the request body.