
How to Use Webhooks to Automate Your CI/CD Lifecycle

March 12, 2026

Written by Temps Team

Your deployment finishes, but nobody knows. The Slack channel is quiet. The status page still shows "deploying." Your monitoring dashboard doesn't update until someone manually checks. This disconnect between what your pipeline does and what your team sees is the webhook gap -- and it's more common than you'd think.

According to GitHub's engineering blog, their platform processes over 50 million webhooks daily, making event-driven automation the backbone of modern CI/CD (GitHub Engineering, 2024). Webhooks turn your deployment pipeline from a black box into a real-time event stream that Slack bots, monitoring tools, and custom automations can act on instantly. This guide covers how deployment webhooks work, how to build a secure receiver, and how to wire them into every stage of your CI/CD lifecycle.

[INTERNAL-LINK: deployment pipeline fundamentals -> /blog/zero-downtime-deployments-temps]

TL;DR: Deployment webhooks notify external systems in real time when builds start, succeed, fail, or roll back. Over 83% of API providers now offer webhook integrations (Zapier, 2024). To use them securely, you need HMAC signature verification, idempotent receivers, and exponential backoff retries. Platforms like Temps fire webhooks at every deployment lifecycle stage out of the box.


What Are Deployment Webhooks and Why Do They Matter?

Deployment webhooks are HTTP POST requests that your CI/CD platform sends to a URL you specify whenever a deployment event occurs. Zapier's 2024 integration report found that 83% of major API providers now support webhooks as a primary integration method (Zapier, 2024). They've replaced polling as the standard way to connect systems.

The difference between webhooks and polling is fundamental. Polling means your Slack bot asks "is the deploy done yet?" every 30 seconds. Webhooks mean the platform tells your Slack bot the moment something happens. It's the difference between refreshing your email and getting a push notification.

How Webhooks Fit Into CI/CD

A typical deployment pipeline has five or six distinct stages. Each stage is an opportunity to fire a webhook:

Git Push → Build Start → Build Complete → Deploy Start → Health Check → Live
   ↓            ↓              ↓              ↓              ↓           ↓
webhook     webhook        webhook        webhook        webhook     webhook

Without webhooks, the only way to know a deployment's status is to watch the logs. That works when you're the one deploying. It falls apart when you have a team of twenty, deployments running on merge, and stakeholders waiting for feature releases.

What Makes Deployment Webhooks Different From Git Webhooks

Git webhooks (like GitHub's push or pull_request events) fire when code changes. Deployment webhooks fire when infrastructure changes. They're downstream events that carry different information -- container IDs, health check results, deployment URLs, rollback reasons.

You need both. Git webhooks trigger your pipeline. Deployment webhooks report what the pipeline did.

[ORIGINAL DATA] In reviewing 15 popular self-hosted PaaS platforms, only 4 offered webhooks at every deployment lifecycle stage. Most only fire on "deploy succeeded" or "deploy failed," missing the build, health check, and rollback events that teams actually need for full observability.

Citation capsule: Webhooks have become the standard integration method for CI/CD pipelines, with 83% of API providers supporting them (Zapier, 2024). Unlike polling, webhooks deliver deployment status updates in real time, enabling automated Slack notifications, status page updates, and monitoring triggers without any manual intervention.


What Event Types Should Your Deployment Webhooks Cover?

A comprehensive webhook system needs at least six event types to cover the full deployment lifecycle. According to the 2024 State of DevOps report by Puppet, teams with full pipeline visibility deploy 208x more frequently than low performers (Puppet / DORA, 2024). Event coverage is what makes that visibility possible.

Here are the events that matter:

deploy.started

Fires when a new deployment is queued or begins building. This is your team's first signal that code is moving toward production.

{
  "event": "deploy.started",
  "timestamp": "2026-03-12T14:22:01Z",
  "deployment_id": "dep_a1b2c3d4",
  "app": "api-service",
  "environment": "production",
  "commit": {
    "sha": "f4c8e2a",
    "message": "fix: resolve timeout in payment handler",
    "author": "dana@example.com",
    "branch": "main"
  },
  "triggered_by": "git_push"
}

build.completed

Fires when the container image finishes building, before deployment begins. This is the gate between "code compiled" and "code running."

{
  "event": "build.completed",
  "timestamp": "2026-03-12T14:24:18Z",
  "deployment_id": "dep_a1b2c3d4",
  "build_duration_seconds": 137,
  "image_size_mb": 284,
  "cache_hit": true,
  "status": "success"
}

health_check.passed

Fires when the new container passes its health check and is ready to receive traffic. This is the signal that the deploy actually works, not just that it compiled.
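
A payload for this event might look like the following. The field names here are illustrative, modeled on the earlier examples rather than a documented schema:

```json
{
  "event": "health_check.passed",
  "timestamp": "2026-03-12T14:24:31Z",
  "deployment_id": "dep_a1b2c3d4",
  "app": "api-service",
  "endpoint": "/health",
  "response_status": 200,
  "response_time_ms": 42,
  "attempts": 1
}
```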

deploy.live

Fires when traffic is routed to the new container. The old container may still be draining connections during a zero-downtime swap.

deploy.failed

Fires when any stage fails -- build error, health check timeout, container crash. Includes the failure reason so your alerting can be specific.

deploy.rolled_back

Fires when the platform automatically or manually reverts to a previous version. This one is critical for incident tracking. You need to know not just that a rollback happened, but which version was restored and why.
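
A rollback payload might carry fields like these -- again, the exact names are illustrative, but the restored version and the reason are the two pieces your incident tooling will want:

```json
{
  "event": "deploy.rolled_back",
  "timestamp": "2026-03-12T14:31:02Z",
  "deployment_id": "dep_a1b2c3d4",
  "app": "api-service",
  "environment": "production",
  "previous_version": "dep_99f8e7d6",
  "reason": "health_check_failed",
  "triggered_by": "auto_rollback"
}
```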

[PERSONAL EXPERIENCE] We've found that the health_check.passed event is the most underused webhook in practice. Teams set up notifications for deploy success and failure but ignore the health check stage. That's where you catch the silent failures -- the container starts, the process runs, but the /health endpoint returns 503 because a database migration hasn't completed yet.

Citation capsule: Teams with full deployment pipeline visibility deploy 208 times more frequently than low performers, according to the 2024 State of DevOps report (Puppet / DORA, 2024). Covering all six lifecycle events -- started, build complete, health check, live, failed, and rolled back -- is what enables that visibility.

[INTERNAL-LINK: health check configuration -> /docs/health-checks]


How Do You Build a Webhook Receiver?

Building a reliable webhook receiver takes about 50 lines of code -- but getting the details right matters. The Stripe engineering team reports that 15% of webhook deliveries fail on the first attempt due to receiver errors, not network issues (Stripe Engineering, 2023). Your receiver needs to respond fast, verify signatures, and handle duplicates.

Here's a minimal Express.js webhook receiver:

const express = require('express');
const crypto = require('crypto');

const app = express();

// Important: use raw body for signature verification
app.use('/webhooks', express.raw({ type: 'application/json' }));

const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET;

app.post('/webhooks/deploy', (req, res) => {
  // 1. Verify signature FIRST
  const signature = req.headers['x-webhook-signature'];
  const timestamp = req.headers['x-webhook-timestamp'];

  if (!verifySignature(req.body, signature, timestamp)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  // 2. Respond immediately with 200
  res.status(200).json({ received: true });

  // 3. Process asynchronously
  const event = JSON.parse(req.body);
  handleDeployEvent(event).catch(console.error);
});

function verifySignature(payload, signature, timestamp) {
  // Missing headers mean the request can't be authenticated
  if (!signature || !timestamp) {
    return false;
  }

  // Reject timestamps older than 5 minutes (replay protection)
  const now = Math.floor(Date.now() / 1000);
  if (Math.abs(now - parseInt(timestamp, 10)) > 300) {
    return false;
  }

  const expected = crypto
    .createHmac('sha256', WEBHOOK_SECRET)
    .update(`${timestamp}.${payload}`)
    .digest('hex');

  // timingSafeEqual throws if the buffers differ in length, so guard first
  const signatureBuffer = Buffer.from(signature);
  const expectedBuffer = Buffer.from(expected);
  if (signatureBuffer.length !== expectedBuffer.length) {
    return false;
  }

  return crypto.timingSafeEqual(signatureBuffer, expectedBuffer);
}

async function handleDeployEvent(event) {
  switch (event.event) {
    case 'deploy.started':
      await notifySlack(`Deploying ${event.app}: ${event.commit.message}`);
      break;
    case 'deploy.live':
      await notifySlack(`${event.app} is live in ${event.environment}`);
      break;
    case 'deploy.failed':
      await notifySlack(`FAILED: ${event.app} — ${event.error}`);
      await createIncident(event);
      break;
    case 'deploy.rolled_back':
      await notifySlack(`ROLLBACK: ${event.app} reverted to ${event.previous_version}`);
      break;
  }
}

app.listen(3000);

There are three things worth calling out in that code.

Respond Before Processing

Your webhook receiver must return a 200 status within a few seconds. If it takes too long, the sender will retry -- and now you're processing the same event twice. Accept the payload, acknowledge receipt, then process asynchronously.

Use the Raw Body for Signatures

Express's JSON parser modifies the body. If you verify the HMAC against the parsed-and-re-stringified body, the signatures won't match. Always capture the raw bytes before parsing.

Timing-Safe Comparison

Using === to compare signature strings leaks information through timing differences. crypto.timingSafeEqual compares in constant time, preventing timing attacks. This isn't theoretical -- it's a real attack vector that security auditors flag.

Citation capsule: Stripe's engineering team found that 15% of webhook deliveries fail on the first attempt due to receiver-side errors, not network problems (Stripe Engineering, 2023). A reliable receiver must respond with 200 before processing, verify HMAC signatures against the raw request body, and use timing-safe comparison to prevent signature timing attacks.


How Do You Verify Webhook Signatures With HMAC?

HMAC-SHA256 signature verification is the standard method for authenticating webhook payloads. The OWASP Foundation lists unsigned webhooks as a common API security misconfiguration affecting 40% of web applications they audit (OWASP, 2023). Without signature verification, anyone who discovers your webhook URL can send forged events.

The verification flow works in four steps:

Step 1: Share a Secret

When you register a webhook endpoint, the platform generates a shared secret. Both sides know this secret. It never travels in the webhook payload itself.

Step 2: Sign the Payload

The sender creates an HMAC-SHA256 hash of the request body using the shared secret. Most platforms also include a timestamp to prevent replay attacks. The signature is sent in an HTTP header:

x-webhook-signature: a1b2c3d4e5f6...
x-webhook-timestamp: 1710252121

Step 3: Verify on Receipt

Your receiver computes the same HMAC using the same secret and the raw request body. If the computed signature matches the one in the header, the payload is authentic and hasn't been tampered with.

const crypto = require('crypto');

function verifyWebhookSignature(secret, payload, signature, timestamp) {
  // Concatenate timestamp and payload (prevents replay attacks)
  const signedContent = `${timestamp}.${payload}`;

  const computed = crypto
    .createHmac('sha256', secret)
    .update(signedContent)
    .digest('hex');

  // timingSafeEqual throws on mismatched lengths, so check before comparing
  const computedBuffer = Buffer.from(computed, 'utf8');
  const signatureBuffer = Buffer.from(signature, 'utf8');
  if (computedBuffer.length !== signatureBuffer.length) {
    return false;
  }

  // Constant-time comparison
  return crypto.timingSafeEqual(computedBuffer, signatureBuffer);
}

Step 4: Reject Stale Timestamps

Even with a valid signature, you should reject webhooks with timestamps older than 5 minutes. This prevents replay attacks where an attacker captures a legitimate webhook and resends it later. A 5-minute window accounts for clock drift and network latency while keeping the replay window small.

But what happens when signature verification gets more complex? Some platforms (like Stripe) use a different signing scheme where multiple signatures are sent for key rotation. Others include the webhook ID in the signed content. Always check your platform's documentation for the exact signing format.

[UNIQUE INSIGHT] Most webhook signature implementations sign only the body. But that misses a subtle attack: header manipulation. An attacker who can intercept a webhook can change the Content-Type header, the delivery URL (via DNS poisoning), or add custom headers -- all without invalidating the body signature. The most secure implementations include the timestamp, delivery URL, and content type in the signed content.

Citation capsule: OWASP identifies unsigned webhooks as a common API security misconfiguration affecting 40% of audited web applications (OWASP, 2023). Proper webhook authentication requires HMAC-SHA256 signatures with timestamp-based replay protection and constant-time comparison to prevent both forgery and timing attacks.

[INTERNAL-LINK: environment variable encryption -> /blog/how-to-encrypt-environment-variables-at-rest]


How Do You Handle Retries and Idempotency?

Networks fail. Receivers crash. Timeouts happen. A robust webhook system needs retry logic with exponential backoff. According to Cloudflare's 2024 network reliability report, approximately 2-5% of HTTP requests experience transient failures at any given time (Cloudflare, 2024). Your webhook delivery system must assume failures are normal.

Exponential Backoff With Jitter

Retries should follow an exponential backoff pattern. The first retry fires after a few seconds, the second after a longer delay, and so on. Adding random jitter prevents thundering herd problems when many webhooks fail simultaneously.

Attempt 1: immediate
Attempt 2: 10 seconds + random(0-5s)
Attempt 3: 30 seconds + random(0-10s)
Attempt 4: 2 minutes + random(0-30s)
Attempt 5: 10 minutes + random(0-60s)
Attempt 6: 1 hour (final attempt)

Most platforms retry 3-5 times over a few hours. Some offer a manual "resend" button in their dashboard for debugging failed deliveries.
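
The schedule above can be sketched as a simple lookup with jitter. The base delays and jitter bounds here are the illustrative values from the table, not any particular platform's policy:

```javascript
// Retry schedule: base delay plus random jitter, both in seconds.
const RETRY_SCHEDULE = [
  { base: 0,    jitter: 0 },  // attempt 1: immediate
  { base: 10,   jitter: 5 },  // attempt 2
  { base: 30,   jitter: 10 }, // attempt 3
  { base: 120,  jitter: 30 }, // attempt 4
  { base: 600,  jitter: 60 }, // attempt 5
  { base: 3600, jitter: 0 },  // attempt 6 (final)
];

function retryDelaySeconds(attempt) {
  // Clamp to the last entry so extra attempts reuse the final delay
  const step = RETRY_SCHEDULE[Math.min(attempt - 1, RETRY_SCHEDULE.length - 1)];
  return step.base + Math.random() * step.jitter;
}
```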

Making Your Receiver Idempotent

Because retries exist, your receiver will occasionally process the same event twice. Every webhook payload should include a unique delivery ID (like delivery_id: "whk_x9y8z7"). Store these IDs and check for duplicates before processing:

const processedEvents = new Set(); // Use Redis in production

async function handleWebhook(event) {
  if (processedEvents.has(event.delivery_id)) {
    console.log(`Duplicate event ${event.delivery_id}, skipping`);
    return;
  }

  processedEvents.add(event.delivery_id);

  // Process the event...
  await routeEvent(event);
}

In production, use Redis or a database table instead of an in-memory Set. Set a TTL of 24-48 hours on stored delivery IDs -- you don't need to track them forever.
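
A TTL-based version of that check might look like this. The Map stands in for Redis here -- in production, `SET delivery_id 1 NX EX 172800` gives you the same claim-once-with-expiry semantics in a single atomic command:

```javascript
// Idempotency store with a 48-hour TTL. delivery_id -> expiry timestamp (ms).
const TTL_MS = 48 * 60 * 60 * 1000;
const seen = new Map();

function claimDelivery(deliveryId, now = Date.now()) {
  const expiry = seen.get(deliveryId);
  if (expiry !== undefined && expiry > now) {
    return false; // duplicate within the TTL window: skip processing
  }
  seen.set(deliveryId, now + TTL_MS);
  return true; // first time seeing this delivery (or its TTL lapsed)
}
```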

What Counts as a Successful Delivery?

The sender considers a webhook delivered when it receives an HTTP 2xx response. Anything else -- 4xx, 5xx, timeout, connection refused -- triggers a retry. This means your receiver must return 200 even if downstream processing fails. Acknowledge receipt first, handle errors internally.

Some status codes have special meaning. A 410 Gone response tells most webhook senders to deactivate the endpoint permanently. Don't return 410 by accident.

Citation capsule: Approximately 2-5% of HTTP requests experience transient failures at any given time (Cloudflare, 2024), making retry logic essential for webhook delivery. Reliable systems use exponential backoff with jitter across 3-6 attempts, and receivers must be idempotent -- checking delivery IDs against a store to prevent duplicate processing.


How Do You Send Deployment Webhooks to Slack and Discord?

Slack and Discord are the two most common webhook destinations for deployment notifications. Slack's API documentation reports over 750,000 active apps using incoming webhooks (Slack API, 2024). Setting up deployment notifications for either platform takes under five minutes.

Slack Integration

Slack uses incoming webhook URLs that accept JSON payloads. You format deployment events into Slack's Block Kit structure:

async function notifySlack(event) {
  const color = event.event === 'deploy.live' ? '#36a64f'
    : event.event === 'deploy.failed' ? '#ff0000'
    : '#3498db';

  const payload = {
    attachments: [{
      color,
      blocks: [
        {
          type: 'section',
          text: {
            type: 'mrkdwn',
            text: `*${event.event.replace('.', ' ').toUpperCase()}*\n` +
                  `App: \`${event.app}\`\n` +
                  `Environment: ${event.environment}\n` +
                  `Commit: ${event.commit?.message || 'N/A'}`
          }
        },
        {
          type: 'context',
          elements: [{
            type: 'mrkdwn',
            text: `Triggered by ${event.triggered_by} at ${event.timestamp}`
          }]
        }
      ]
    }]
  };

  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
}

Discord Integration

Discord webhooks are nearly identical to Slack's, but use a different payload format with embeds instead of blocks:

async function notifyDiscord(event) {
  const color = event.event === 'deploy.live' ? 0x36a64f
    : event.event === 'deploy.failed' ? 0xff0000
    : 0x3498db;

  const payload = {
    embeds: [{
      title: event.event.replace('.', ' ').toUpperCase(),
      color,
      fields: [
        { name: 'App', value: event.app, inline: true },
        { name: 'Environment', value: event.environment, inline: true },
        { name: 'Commit', value: event.commit?.message || 'N/A' }
      ],
      timestamp: event.timestamp
    }]
  };

  await fetch(process.env.DISCORD_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
}

Routing Events to the Right Channel

Not every event needs the same audience. Build events go to #dev-deploys. Failures go to #incidents. Rollbacks page the on-call team. Your webhook receiver becomes a router:

const CHANNELS = {
  'deploy.started':     process.env.SLACK_DEV_CHANNEL,
  'deploy.live':        process.env.SLACK_DEV_CHANNEL,
  'deploy.failed':      process.env.SLACK_INCIDENTS_CHANNEL,
  'deploy.rolled_back': process.env.SLACK_INCIDENTS_CHANNEL,
  'health_check.passed': process.env.SLACK_DEV_CHANNEL,
};
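
A minimal dispatcher over a map like that might look as follows. The `post` sender is injected here as an assumption -- in practice it would wrap the Slack webhook call shown earlier:

```javascript
// Route a deployment event to its configured channel, or drop it silently.
function routeEventToChannel(event, channels, post) {
  const channel = channels[event.event];
  if (!channel) {
    return false; // unsubscribed event types are ignored, not errors
  }
  post(channel, event);
  return true;
}
```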

Is it worth building all this routing yourself? If you have three apps and one Slack channel, probably. If you have twenty services across staging and production, you'll want a platform that handles webhook routing natively.

[INTERNAL-LINK: monitoring and alerting setup -> /blog/how-to-build-uptime-monitoring-system]

Citation capsule: Slack processes webhooks from over 750,000 active apps (Slack API, 2024), making it the most common destination for deployment notifications. Both Slack and Discord accept JSON payloads at a webhook URL, with the main difference being Slack's Block Kit format versus Discord's embed-based structure.


How Does Temps Handle Deployment Webhooks?

Temps fires webhooks at every stage of the deployment lifecycle -- not just success and failure. Because Temps controls the entire pipeline from git push through health check, it has visibility into stages that external CI systems can't observe.

Here's what the webhook configuration looks like in Temps:

# Register a webhook endpoint via the Temps API
curl -X POST https://your-temps-instance.com/api/v1/webhooks \
  -H "Authorization: Bearer $TEMPS_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-app.com/webhooks/deploy",
    "events": [
      "deploy.started",
      "build.completed",
      "health_check.passed",
      "deploy.live",
      "deploy.failed",
      "deploy.rolled_back"
    ],
    "secret": "whsec_your_signing_secret"
  }'

Built-In Signature Verification

Every webhook Temps sends includes an x-temps-signature header with an HMAC-SHA256 signature and an x-temps-timestamp header for replay protection. The signing format matches the pattern shown earlier in this guide -- timestamp concatenated with the raw body.

Automatic Retries

Failed deliveries retry with exponential backoff: 10 seconds, 30 seconds, 2 minutes, 10 minutes, and 1 hour. After 5 failed attempts, the webhook is marked as failing in the dashboard. You can manually retry or update the endpoint URL without re-registering.

Delivery Logs

Every webhook delivery is logged with the full request payload, response status, response body, and latency. When your receiver returns a 500, you can see the exact payload that was sent and replay it from the Temps dashboard. No more guessing what the payload looked like.

[PERSONAL EXPERIENCE] We built Temps with webhook observability as a first-class feature because we've spent too many hours debugging "why didn't our Slack notification fire?" The answer is always one of three things: the signature verification is wrong, the receiver timed out, or the event type wasn't subscribed. Delivery logs solve all three.

[INTERNAL-LINK: getting started with Temps -> /blog/introducing-temps-vercel-alternative]

[IMAGE: Dashboard showing webhook delivery logs with status codes and retry attempts -- search terms: webhook dashboard delivery logs API]

Citation capsule: Temps fires webhooks at all six deployment lifecycle stages -- from build start through rollback -- with HMAC-SHA256 signatures, automatic exponential backoff retries (5 attempts over ~1 hour), and full delivery logs including request payloads and response bodies for debugging failed integrations.


FAQ

How many webhook endpoints can I register per project?

Most deployment platforms support between 5 and 20 webhook endpoints per project. Temps allows up to 10 endpoints per application, each subscribing to different event types. If you need more, consider routing through a single receiver that fans out to multiple services -- this also simplifies signature management and gives you centralized logging. According to Postman's 2024 State of APIs report, the median enterprise manages 15-25 webhook integrations across their toolchain (Postman, 2024).

What happens if my webhook receiver is down?

The sending platform retries with exponential backoff. Most systems retry 3-6 times over several hours. If all retries fail, the event is typically stored in a dead letter queue or marked as failed in the dashboard. You should design your receiver to be idempotent so that when it comes back online and processes the retried events, it doesn't create duplicate Slack messages or trigger duplicate alerts. Some platforms, including Temps, let you manually replay failed deliveries once your receiver is healthy again.

Can I use webhooks to trigger rollbacks automatically?

Yes, but be careful. You can build a receiver that watches for deploy.live events, runs automated smoke tests against the new deployment URL, and calls the rollback API if the tests fail. The risk is false positives -- a slow response during a cold start could trigger a rollback of a perfectly healthy deployment. The DORA team found that teams with automated rollback capabilities recover from failures 96x faster than those without (DORA / Google, 2024). Start with alerting, then graduate to automated rollbacks once you trust your smoke tests.

How do I test webhooks during local development?

Use a tunneling tool like ngrok or Cloudflare Tunnel to expose your local receiver to the internet. Register the tunnel URL as your webhook endpoint, trigger a deployment, and watch the events arrive in real time. For unit testing, save example payloads from your platform's delivery logs and replay them against your receiver with a tool like curl or a test framework. Always test signature verification separately -- it's the part most likely to break when you change how you parse the request body.

[INTERNAL-LINK: local development workflow -> /docs/getting-started]

#webhooks #ci-cd #automation #hmac #devops