
How Git-Push Deployments Work Under the Hood

March 18, 2026

Written by David Viejo

git push deploys to production. It's the workflow that Heroku popularized, Vercel polished, and dozens of tools since have copied. But most developers who use it every day don't know what's happening between the push and the live URL. Understanding the pipeline helps you debug it when it breaks and make smarter decisions about your deployment setup.

Here's what happens at each step.

TL;DR: Every git-push deployment follows the same 7-step pipeline: webhook, build queue, clone, container build, health check, traffic swap, cleanup. The differences between platforms (Vercel, Coolify, Dokploy, self-hosted) are in speed, traffic swap correctness, and how much you configure manually. When a deploy fails, it's almost always the build, the health check, or the proxy.


Step 1: The Webhook Fires

When you push to a repository on GitHub, GitLab, or Gitea, the platform sends an HTTP POST to any webhooks registered for that repo. The payload includes the commit SHA, the branch name, the repository URL, and the pusher's identity.

Your deployment platform registers one of these webhooks when you connect a repository. On Vercel, it happens automatically when you import a project. On Coolify, Dokploy, or a self-hosted tool, you configure it from the dashboard during project setup.

The webhook request is just an HTTP POST. Your deployment server needs a public IP and an open port to receive it. This is why deployment platforms need to be accessible from the internet, not just from your private network.

One thing worth knowing: GitHub will retry a webhook if your server doesn't respond with a 2xx status within 10 seconds. If your build system is slow to acknowledge, you can end up with duplicate builds. Well-built platforms deduplicate by commit SHA.
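Both points -- acknowledge fast, dedupe by SHA -- fit in a small receiver. This is a sketch using Python's standard library; the payload fields (`after`, `ref`) match GitHub's push event, and the in-memory set stands in for whatever persistent store a real platform would use:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def queue_push(payload, seen_shas, build_queue):
    """Queue a build for a push payload, skipping SHAs we've already seen
    (protects against webhook retries triggering duplicate builds)."""
    sha = payload.get("after")  # GitHub push events carry the new head SHA here
    if not sha or sha in seen_shas:
        return False
    seen_shas.add(sha)
    build_queue.append({"sha": sha, "ref": payload.get("ref")})
    return True

class WebhookHandler(BaseHTTPRequestHandler):
    seen = set()
    queue = []

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        queue_push(payload, self.seen, self.queue)
        # Respond immediately: the build runs elsewhere, so GitHub gets its
        # 2xx well inside the 10-second window and never retries.
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

The key design point is that the handler does no work beyond recording the job; everything slow happens after the response is sent.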

Step 2: The Build Queue

The deployment server receives the webhook, parses the payload, and queues a build job. It stores the commit SHA, the branch, and a pointer to the repository.

Queuing matters because pushes can come faster than builds complete. If two developers push within 30 seconds of each other, the second push should wait for the first build to finish (or cancel it, depending on the platform's configuration). Naively triggering a concurrent build for every push causes resource contention and race conditions on the traffic swap.

Platforms handle this differently: Vercel runs builds in parallel on separate infrastructure. Coolify queues them on your server. The right behavior depends on whether you have enough build capacity to run concurrently.
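One reasonable queueing policy is to coalesce: keep at most one pending build per branch, so a newer push supersedes a queued-but-not-started one. This is a hypothetical sketch of that policy (real platforms may instead cancel the in-flight build):

```python
from collections import deque

class CoalescingQueue:
    """At most one pending build per branch. A push that arrives while an
    older one is still queued replaces it, so the build worker never spends
    a slot on an already-stale commit."""

    def __init__(self):
        self._pending = {}     # branch -> newest queued commit SHA
        self._order = deque()  # branches in arrival order

    def enqueue(self, branch, sha):
        if branch not in self._pending:
            self._order.append(branch)
        self._pending[branch] = sha  # older queued SHA for this branch is dropped

    def next_job(self):
        """Pop the oldest branch with a pending build, or None if idle."""
        if not self._order:
            return None
        branch = self._order.popleft()
        return branch, self._pending.pop(branch)
```

With this policy, two pushes to main in quick succession produce one build, of the newer commit.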

Step 3: Clone and Detect

The build agent clones the repository at the commit SHA from the webhook payload, not at the branch head, because the branch might advance between when the webhook fired and when the build starts.

After cloning, the build system detects how to build the app. There are two common approaches:

Dockerfile detection. If a Dockerfile is present at the root, use it. The developer has already specified the build process. This is the most explicit option and the least surprising.

Buildpacks. If there's no Dockerfile, Cloud Native Buildpacks (CNB) scan the repository for language indicators: a package.json suggests Node.js, a requirements.txt suggests Python, a go.mod suggests Go. The matching buildpack downloads the right runtime, installs dependencies, and produces a container image. Heroku pioneered this model; the CNB specification standardized it. Nixpacks is a newer alternative used by Railway that takes a similar detect-and-build approach with Nix-based reproducible builds.

The advantage of buildpacks: you push code without a Dockerfile and the platform figures it out. The downside: if your app needs something non-standard, the buildpack's defaults might not be right, and debugging why takes longer than just writing a Dockerfile.
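The detection logic itself is simple. This sketch compresses what CNB or Nixpacks actually probe for down to three marker files, but the shape is the same: explicit Dockerfile wins, otherwise match a buildpack:

```python
from pathlib import Path

# Order matters: an explicit Dockerfile always wins; otherwise fall back to
# buildpack-style detection. This marker list is a simplification of the
# real detect phase.
BUILDPACK_MARKERS = [
    ("package.json", "nodejs"),
    ("requirements.txt", "python"),
    ("go.mod", "go"),
]

def detect_build_strategy(repo_dir):
    repo = Path(repo_dir)
    if (repo / "Dockerfile").exists():
        return "dockerfile"
    for marker, runtime in BUILDPACK_MARKERS:
        if (repo / marker).exists():
            return f"buildpack:{runtime}"
    return "unknown"  # real platforms fail the build with a clear error here
```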

Step 4: The Container Build

The actual build runs inside Docker (or a Docker-compatible builder like BuildKit). For a Node.js app, this means npm ci followed by your build command. For a Python app, pip install. For a Go binary, go build.

Build logs stream in real time to the dashboard. This is worth appreciating: you're watching exactly what would happen if you ran docker build locally. If the build fails because a dependency version is missing or an environment variable is undefined, the error is right there in the log.

A detail that matters for build speed: Docker layer caching. A well-structured Dockerfile copies package.json and runs npm install before copying the rest of the source code. That way, the installed dependencies layer gets cached between builds, and only the application code layer gets rebuilt on each push. A poorly structured Dockerfile invalidates the cache on every build. The difference is 30 seconds versus 4 minutes for a typical Node.js app.
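A cache-friendly layering for a Node.js app looks like this (illustrative; adjust the base image and build command to your project):

```dockerfile
# Cache-friendly layering for a Node.js app.
FROM node:20-slim
WORKDIR /app

# Copy only the dependency manifests first, so this layer and the npm ci
# layer below are reused as long as package.json/package-lock.json are unchanged.
COPY package.json package-lock.json ./
RUN npm ci

# Application code changes on every push, but only invalidates from here down.
COPY . .
RUN npm run build
CMD ["npm", "start"]
```

Copying `package.json` before the rest of the source is the whole trick: a push that touches only application code reuses the cached `npm ci` layer.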

Step 5: Health Check Before Traffic Switch

Before the new container gets any production traffic, most deployment platforms run a health check. The new container starts, and the platform pings a health endpoint (usually /health or just /) and waits for a 200 response.

This step is what makes zero-downtime deployment possible. The old container keeps serving requests while the new one warms up. If the new container never passes its health check (because the new code has a startup bug, or the database migration failed, or a required environment variable is missing), it never receives traffic. The old version stays live.

Without a health check, the platform would just replace the old container with the new one and hope. Sometimes it works. Sometimes users get 502 errors for 10-30 seconds while the new container cold-starts.
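The health-check loop amounts to polling with a deadline. In this sketch the probe (which would be an HTTP GET against /health returning the status code), the clock, and the sleep are injected so the logic is testable; a real platform would also distinguish "still booting" from "crashed":

```python
import time

def wait_until_healthy(probe, timeout=60.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll the new container's health endpoint until it answers 200 or the
    timeout expires. Returns True only if the container may receive traffic."""
    deadline = clock() + timeout
    while True:
        try:
            if probe() == 200:
                return True
        except OSError:
            pass  # connection refused while the process is still booting: expected
        if clock() >= deadline:
            return False
        sleep(interval)
```

The failure mode this guards against is exactly the one described above: a container that never returns 200 simply times out, and traffic never moves.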

Step 6: The Traffic Swap

Once the new container passes its health check, the proxy routes new requests to it. The mechanism varies by platform:

Nginx-based platforms update an upstream block and reload the nginx config. A reload is graceful in the common case (old worker processes finish their in-flight requests before exiting), but long-lived connections such as WebSockets can still be cut off, depending on configuration.

Traefik (used by Coolify and Dokploy) supports dynamic configuration: it picks up the new container via Docker labels without restarting. In-flight requests on the old container are generally handled gracefully, though the behavior depends on Traefik's version and configuration.

Edge networks (Vercel, Cloudflare) route traffic via their global infrastructure with connection draining behavior, ensuring in-flight requests complete on the old version before it's removed.

The key distinction is between "stop sending new requests to old container" and "wait for old requests to finish before stopping the old container." The second is harder to implement correctly, but it's the difference between zero-downtime and almost-zero-downtime.
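The correct ordering is easier to see as code. This is a platform-agnostic sketch: the four callables stand in for platform-specific operations (starting the container, the health-check loop, a Traefik or nginx routing update, and a graceful stop along the lines of `docker stop -t 30`, which sends SIGTERM and allows in-flight requests to drain):

```python
def swap_traffic(old_id, start_new, is_healthy, route_to, stop_gracefully):
    """Zero-downtime swap ordering: never route before the health check
    passes, never stop the old container before it has drained.
    Returns the id of the container now serving traffic."""
    new_id = start_new()        # old container is still serving during startup
    if not is_healthy(new_id):
        stop_gracefully(new_id) # failed deploy: tear down the broken container
        return old_id           # ...and the old version never lost traffic
    route_to(new_id)            # from here, *new* requests hit the new container
    stop_gracefully(old_id)     # old container drains in-flight requests, then stops
    return new_id
```

Note that the failure path and the success path both end with the same invariant: exactly one healthy container holds the route.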

Step 7: Cleanup

After traffic moves to the new container, the old container stops. Container images from old deploys get retained for a configurable period (to support rollbacks) and then pruned. Build artifacts get cleaned up.

This cleanup step is easy to neglect in a DIY setup and causes a subtle problem: if you're running frequent deploys, old Docker images accumulate and fill your disk. Platforms handle this automatically; a bare Docker setup needs a cron job running docker system prune.
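For the DIY case, a nightly crontab entry along these lines works; the one-week retention window is an assumption, so match it to however far back you want to be able to roll back:

```shell
# Nightly at 03:00: remove stopped containers, unused networks, build cache,
# and any unused image older than a week (168h keeps recent deploys available
# for rollback).
0 3 * * * docker system prune --all --force --filter "until=168h" >> /var/log/docker-prune.log 2>&1
```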

The Full Pipeline, Summarized

git push
  -> webhook fires (HTTP POST to your deployment server)
  -> build queued (deduplicated by SHA)
  -> repository cloned at commit SHA
  -> build detection (Dockerfile or buildpacks)
  -> container build (Docker, logs stream to dashboard)
  -> health check (new container must pass before traffic switches)
  -> traffic swap (proxy re-routes requests to new container)
  -> old container drains and stops
  -> image cleanup

Every platform that does git-push deploys, whether it's Vercel, Netlify, Coolify, Heroku, or a self-hosted tool, runs some version of this same pipeline. The differences are in speed, correctness of the traffic swap, and what you have to configure manually versus what's automatic. If you want to see what the manual version looks like without any platform, we wrote a full walkthrough: how to deploy Next.js to a VPS the manual way. For more on the traffic swap specifically, see how to add zero-downtime deployments to any Docker app.

When a deploy fails, it's almost always at one of three steps: the build (a code error), the health check (a startup bug or missing env var), or the traffic swap (a proxy misconfiguration). Knowing which step failed cuts your debugging time in half.

#deployment #git #devops #docker #webhooks #buildpacks