March 12, 2026
Written by Temps Team
Last updated March 23, 2026
Vercel's killer feature isn't hosting. It's preview deployments. Every pull request gets a unique URL where reviewers can see exactly what changed, click around, and catch issues that never show up in a code diff. But Vercel charges per seat, and preview builds count against your plan limits. At scale, you're paying hundreds per month for what's essentially a Docker container behind a subdomain.
The good news: you can build this for any Docker app on your own infrastructure. The pattern is straightforward -- git webhook, container build, dynamic subdomain routing, PR comment with the URL, cleanup on merge. This guide walks through the full architecture, a working DIY implementation, and how modern self-hosted platforms handle it natively.
TL;DR: Preview environments give every pull request a unique, shareable URL for visual review before merge, letting teams catch UI bugs before they reach production. You can build them yourself with GitHub webhooks, Docker, and wildcard DNS -- or use a self-hosted platform like Temps that handles per-branch deploys, auto-cleanup, and scale-to-zero out of the box.
Teams using preview environments report 45% faster code review cycles according to the Argo Project's analysis of GitOps adoption patterns. The reason is simple: visual review catches what code review misses. A CSS change that looks fine in a diff can break an entire layout -- and nobody catches it until production.
Here's what preview environments unlock for each role on your team:
For developers. No more "pull my branch and run it locally" messages in Slack. Reviewers click a link, see the change, leave feedback. The review cycle drops from hours to minutes because there's zero setup friction.
For designers. They don't need Git access or a local dev environment. A URL in a PR comment is all they need to verify that the implementation matches the mockup. Pixel-level feedback happens in the PR, not in a separate design review meeting.
For QA. Testing against a deployed environment catches issues that localhost hides -- real DNS resolution, production-like networking, actual SSL certificates. QA can test multiple PRs simultaneously without juggling local branches.
For PMs and stakeholders. Feature demos happen by sharing a link, not scheduling a screen share. A PM can review a feature on their phone during lunch, leave a comment, and the developer sees it immediately.
We've seen teams cut their average PR review time from 2 days to under 4 hours after adopting preview environments. The biggest factor isn't the technology -- it's removing the friction of "I'll review it when I get a chance to pull the branch." When reviewing is as easy as clicking a link, it happens immediately.
The core architecture follows a five-step pipeline that GitHub's own internal tooling has used since 2018, processing over 50 million webhooks daily across their platform. Your preview system needs the same fundamental components, just at a smaller scale.
Here's the full flow:
┌──────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Git Push /  │────>│ Webhook Handler  │────>│  Build Service  │
│  PR Opened   │     │ (validate, auth) │     │ (docker build)  │
└──────────────┘     └──────────────────┘     └────────┬────────┘
                                                       │
                                                       v
┌──────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  PR Comment  │<────│  DNS / Routing   │<────│   Deploy to     │
│   with URL   │     │ (wildcard cert)  │     │ Unique Subdomain│
└──────────────┘     └──────────────────┘     └─────────────────┘

On PR close/merge:

┌──────────────┐     ┌──────────────────┐
│   Webhook:   │────>│  Stop & Remove   │
│  PR closed   │     │    Container     │
└──────────────┘     └──────────────────┘
Stage 1: Webhook reception. Your server listens for GitHub (or GitLab) webhook events -- specifically pull_request.opened, pull_request.synchronize (new push), and pull_request.closed. Each event carries the branch name, commit SHA, and PR number.
Stage 2: Build. Clone the repo at the specific commit, run docker build, and tag the image with the PR number or branch name. This is where most of the time goes -- a typical Node.js app takes 1-3 minutes to build.
Stage 3: Deploy. Start the container with environment variables pointing to preview-specific resources (database, API keys). Map it to a unique subdomain like pr-42.preview.yourapp.com.
Stage 4: Notify. Post a comment on the PR with the preview URL. Update the comment on each subsequent push rather than creating a new one. Set a GitHub commit status to "deployed" so reviewers know the preview is ready.
Stage 5: Cleanup. When the PR closes (merge or abandon), stop the container, remove the image, and clean up any preview-specific resources like databases or storage volumes.
[IMAGE: Architecture diagram showing the webhook-to-deployment pipeline for preview environments -- preview environment architecture deploy pipeline diagram]
Wildcard DNS is the foundation of branch-to-subdomain routing. Cloudflare handles over 20% of all web traffic and offers free wildcard DNS records. A single *.preview.yourapp.com record pointing to your server means any subdomain resolves automatically -- no per-PR DNS configuration needed.
You have two practical options for subdomain naming:
PR-number based: pr-42.preview.yourapp.com
Branch-name based: feature-auth-redesign.preview.yourapp.com (requires slugification: replace / with -, strip special characters, truncate to 63 chars to fit the DNS label limit)

Most teams use the PR-number pattern because it's simpler to implement and less error-prone. The PR comment contains the full context anyway.
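If you do go with branch-name subdomains, the slug rules can be sketched in a few lines of JavaScript (a hypothetical helper, not part of the webhook server shown later):

```javascript
// Turn a git branch name into a DNS-safe subdomain label (illustrative):
// lowercase, collapse runs of non-alphanumerics to '-', truncate to the
// 63-character DNS label limit, then trim any leading/trailing hyphens.
function slugifyBranch(branch) {
  return branch
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .slice(0, 63)
    .replace(/^-+|-+$/g, '');
}

console.log(slugifyBranch('feature/Auth-Redesign')); // feature-auth-redesign
```

Truncating before trimming matters: cutting at 63 characters can leave a dangling hyphen, which is not a valid label edge.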
Set up a single wildcard A record:
*.preview.yourapp.com → A → YOUR_SERVER_IP
Or if you're behind a load balancer:
*.preview.yourapp.com → CNAME → lb.yourapp.com
That's it. Every subdomain under preview.yourapp.com now resolves to your server. The reverse proxy handles routing each subdomain to the correct container.
HTTPS is non-negotiable -- browsers flag HTTP sites and many APIs refuse non-TLS connections. Let's Encrypt issues free wildcard certificates, but they require DNS-01 validation instead of the simpler HTTP-01 challenge. According to Let's Encrypt's own statistics, they've issued over 5 billion certificates since launch.
Here's how to get a wildcard cert using certbot with the Cloudflare DNS plugin:
# Install certbot with Cloudflare plugin
pip install certbot certbot-dns-cloudflare
# Create Cloudflare credentials file
cat > /etc/letsencrypt/cloudflare.ini << EOF
dns_cloudflare_api_token = YOUR_CLOUDFLARE_API_TOKEN
EOF
chmod 600 /etc/letsencrypt/cloudflare.ini
# Request wildcard certificate
certbot certonly \
--dns-cloudflare \
--dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
-d "*.preview.yourapp.com" \
--preferred-challenges dns-01
Let's Encrypt certificates are valid for 90 days. Distro packages of certbot ship a systemd timer that renews them automatically; with a pip install like the one above, schedule renewal yourself (for example, a daily cron entry running certbot renew --quiet). Either way, one certificate covers every preview subdomain you'll ever create.
Building a working preview system requires roughly 200-300 lines of webhook handler code plus reverse proxy configuration. According to CircleCI's report, the median CI pipeline runs 5.6 minutes, and your preview build will fall in that same range for most applications.
This Node.js server receives GitHub webhooks and manages the preview lifecycle:
// preview-server.js
const express = require('express');
const crypto = require('crypto');
const { execSync } = require('child_process');
const { Octokit } = require('@octokit/rest');
const app = express();
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET;
const PREVIEW_DOMAIN = 'preview.yourapp.com';
const REPO_DIR = '/tmp/preview-builds';
// Verify GitHub webhook signature
function verifySignature(req) {
  const sig = req.headers['x-hub-signature-256'];
  if (!sig) return false; // header absent: reject before comparing
  const hmac = crypto.createHmac('sha256', WEBHOOK_SECRET);
  const digest = 'sha256=' + hmac.update(req.rawBody).digest('hex');
  const sigBuf = Buffer.from(sig);
  const digestBuf = Buffer.from(digest);
  // timingSafeEqual throws on length mismatch, so check lengths first
  return sigBuf.length === digestBuf.length &&
    crypto.timingSafeEqual(sigBuf, digestBuf);
}
app.use(express.json({
verify: (req, _res, buf) => { req.rawBody = buf; }
}));
app.post('/webhook', async (req, res) => {
if (!verifySignature(req)) return res.sendStatus(401);
const event = req.headers['x-github-event'];
const { action, pull_request, repository } = req.body;
if (event !== 'pull_request') return res.sendStatus(200);
const prNumber = pull_request.number;
const branch = pull_request.head.ref;
const sha = pull_request.head.sha;
const containerName = `preview-pr-${prNumber}`;
const previewUrl = `https://pr-${prNumber}.${PREVIEW_DOMAIN}`;
if (action === 'opened' || action === 'reopened' || action === 'synchronize') {
await deployPreview({ repository, branch, sha, containerName,
prNumber, previewUrl });
} else if (action === 'closed') {
await cleanupPreview({ containerName, repository, prNumber });
}
res.sendStatus(200);
});
async function deployPreview({ repository, branch, sha,
containerName, prNumber, previewUrl }) {
const repoUrl = repository.clone_url;
const buildDir = `${REPO_DIR}/${containerName}`;
try {
    // Clone and build (branch names come straight from the webhook payload;
    // quote them so shell metacharacters in a branch name can't break the command)
    execSync(`rm -rf ${buildDir} && git clone --depth 1 \
      --branch '${branch}' ${repoUrl} ${buildDir}`);
execSync(`docker build -t ${containerName}:${sha} ${buildDir}`);
// Stop old container if exists
try { execSync(`docker stop ${containerName} && \
docker rm ${containerName}`); }
catch {}
    // Run new container (the certresolver label matches the "le" resolver
    // defined in the Traefik compose file below)
    execSync(`docker run -d --name ${containerName} \
      --network preview-net \
      -e NODE_ENV=preview \
      -e DATABASE_URL=postgresql://preview:preview@db:5432/pr_${prNumber} \
      --label "traefik.enable=true" \
      --label "traefik.http.routers.${containerName}.rule=\
Host(\`pr-${prNumber}.${PREVIEW_DOMAIN}\`)" \
      --label "traefik.http.routers.${containerName}.tls=true" \
      --label "traefik.http.routers.${containerName}.tls.certresolver=le" \
      ${containerName}:${sha}`);
// Comment on PR
await octokit.issues.createComment({
owner: repository.owner.login,
repo: repository.name,
issue_number: prNumber,
body: `Deploy preview ready:\n${previewUrl}\n\nCommit: ${sha.slice(0, 7)}`
});
} catch (err) {
console.error(`Preview deploy failed for PR #${prNumber}:`, err);
}
}
async function cleanupPreview({ containerName, repository, prNumber }) {
try {
execSync(`docker stop ${containerName} && docker rm ${containerName}`);
execSync(`docker rmi $(docker images ${containerName} -q) \
2>/dev/null || true`);
console.log(`Cleaned up preview for PR #${prNumber}`);
} catch (err) {
console.error(`Cleanup failed for PR #${prNumber}:`, err);
}
}
app.listen(9090, () => console.log('Preview server running on :9090'));
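One gap worth noting: the handler above posts a fresh comment on every push, while Stage 4 recommends updating a single comment in place. A sketch of the update-in-place variant using the same Octokit client (the hidden HTML marker is an assumed convention, not an Octokit feature):

```javascript
// Update the preview comment in place instead of posting a new one per push.
// A fixed marker string identifies the bot's previous comment.
const MARKER = '<!-- preview-deploy-comment -->'; // assumed convention

async function upsertPreviewComment(octokit, { owner, repo, prNumber, body }) {
  const fullBody = `${MARKER}\n${body}`;
  const { data: comments } = await octokit.issues.listComments({
    owner, repo, issue_number: prNumber, per_page: 100,
  });
  const existing = comments.find((c) => c.body && c.body.includes(MARKER));
  if (existing) {
    // Edit the previous comment so reviewers always see the latest deploy
    await octokit.issues.updateComment({
      owner, repo, comment_id: existing.id, body: fullBody,
    });
  } else {
    await octokit.issues.createComment({
      owner, repo, issue_number: prNumber, body: fullBody,
    });
  }
}
```

Swapping this in for the createComment call in deployPreview keeps the PR thread to one comment per preview.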
Traefik discovers containers automatically through Docker labels, so no configuration reload is needed when previews are created or destroyed. Note that the compose file declares preview-net as an external network: create it once with docker network create preview-net before starting Traefik.
# docker-compose.yml
services:
  traefik:
    image: traefik:v3.2
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.le.acme.dnschallenge=true"
      - "--certificatesresolvers.le.acme.dnschallenge.provider=cloudflare"
      - "--certificatesresolvers.le.acme.email=you@yourapp.com"
      # Persist issued certificates in the mounted volume below
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt
    environment:
      - CF_API_EMAIL=you@yourapp.com
      - CF_DNS_API_TOKEN=your-cloudflare-token
    networks:
      - preview-net

networks:
  preview-net:
    external: true

volumes:
  letsencrypt:
If you'd rather use GitHub Actions instead of a self-hosted webhook handler, here's a workflow that builds and deploys via SSH:
# .github/workflows/preview.yml
name: Deploy Preview
on:
pull_request:
types: [opened, synchronize, closed]
concurrency:
group: preview-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
deploy:
if: github.event.action != 'closed'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
      # Registry credentials stored as repository secrets (names are up to you)
      - name: Log in to registry
        uses: docker/login-action@v3
        with:
          registry: registry.yourapp.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push image
        run: |
          docker build -t registry.yourapp.com/app:pr-${{ github.event.pull_request.number }} .
          docker push registry.yourapp.com/app:pr-${{ github.event.pull_request.number }}
- name: Deploy to preview server
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.PREVIEW_HOST }}
username: deploy
key: ${{ secrets.DEPLOY_KEY }}
script: |
docker pull registry.yourapp.com/app:pr-${{ github.event.pull_request.number }}
docker stop preview-pr-${{ github.event.pull_request.number }} || true
docker rm preview-pr-${{ github.event.pull_request.number }} || true
            docker run -d \
              --name preview-pr-${{ github.event.pull_request.number }} \
              --network preview-net \
              --label "traefik.enable=true" \
              --label "traefik.http.routers.pr-${{ github.event.pull_request.number }}.rule=Host(\`pr-${{ github.event.pull_request.number }}.preview.yourapp.com\`)" \
              --label "traefik.http.routers.pr-${{ github.event.pull_request.number }}.tls=true" \
              --label "traefik.http.routers.pr-${{ github.event.pull_request.number }}.tls.certresolver=le" \
              registry.yourapp.com/app:pr-${{ github.event.pull_request.number }}
- name: Comment PR
uses: actions/github-script@v7
with:
script: |
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: `Preview deployed: https://pr-${context.issue.number}.preview.yourapp.com`
})
cleanup:
if: github.event.action == 'closed'
runs-on: ubuntu-latest
steps:
- name: Remove preview
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.PREVIEW_HOST }}
username: deploy
key: ${{ secrets.DEPLOY_KEY }}
script: |
docker stop preview-pr-${{ github.event.pull_request.number }} || true
docker rm preview-pr-${{ github.event.pull_request.number }} || true
This minimal setup handles 90% of preview environment use cases. We've tested it with teams running 15-20 concurrent PRs on a single 4-core, 8GB server without performance issues. The bottleneck is always build time, not runtime.
Running preview environments at scale introduces challenges that the basic setup doesn't address. According to Flexera, organizations waste 28% of cloud spend on idle or underused resources, and always-on preview environments are a prime contributor to that waste.
Every preview needs its own data. You have three options, each with tradeoffs:
Shared database, separate schemas. Create a new PostgreSQL schema per PR (CREATE SCHEMA pr_42). Cheap and fast to provision. Downside: migrations run against every schema, and cross-schema pollution is possible if your app uses hardcoded schema references.
Database per preview. Spin up a fresh database container for each PR. Full isolation but heavy on resources -- each PostgreSQL instance uses 30-50MB of baseline RAM. With 20 open PRs, that's 600MB-1GB just for databases.
Snapshot-based seeding. Restore a database snapshot (pg_dump) for each preview. Best of both worlds: isolated data without running extra database servers. The snapshot stays in sync with your seed script. This is what most mature teams settle on.
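As a sketch of the schema-per-PR option, provisioning reduces to a couple of SQL statements per pull request. This helper just builds the SQL text (the helper and its names are illustrative; executing the statements would use whatever Postgres client your stack already has):

```javascript
// Generate the SQL for an isolated per-PR schema (schema-per-PR option).
// The pr_<number> naming mirrors the DATABASE_URL used in the deploy script.
function previewSchemaSql(prNumber) {
  if (!Number.isInteger(prNumber) || prNumber <= 0) {
    throw new Error(`invalid PR number: ${prNumber}`);
  }
  const schema = `pr_${prNumber}`;
  return {
    create: `CREATE SCHEMA IF NOT EXISTS ${schema};`,
    // CASCADE drops the schema's tables along with it at cleanup time
    drop: `DROP SCHEMA IF EXISTS ${schema} CASCADE;`,
    // The app selects its schema per-connection instead of hardcoding it
    searchPath: `SET search_path TO ${schema};`,
  };
}

console.log(previewSchemaSql(42).create); // CREATE SCHEMA IF NOT EXISTS pr_42;
```

Validating the PR number before interpolating it into SQL matters here for the same reason quoting matters in the deploy script: webhook-derived values should never reach an interpreter unchecked.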
Each preview environment may need different API keys, OAuth redirect URLs, or service endpoints. Hardcoding environment variables in your deploy script works for five previews. It doesn't work for fifty.
The pattern that scales: store a .env.preview template in your repo with placeholder values. Your deploy script substitutes PR-specific values at container start time. Secrets stay in your CI/CD system's secret store, never in the repository.
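The substitution step is a one-liner once you pick a placeholder syntax. A sketch assuming {{KEY}} placeholders in .env.preview (the syntax is an assumption; the guide doesn't prescribe one):

```javascript
// Render a .env.preview template by replacing {{KEY}} placeholders with
// PR-specific values. Unknown keys are left as-is so a misconfiguration
// is visible in the deployed environment rather than silently blanked.
function renderEnvTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? String(values[key]) : match
  );
}

const template = [
  'APP_URL=https://pr-{{PR_NUMBER}}.preview.yourapp.com',
  'DATABASE_URL={{DATABASE_URL}}',
].join('\n');

console.log(renderEnvTemplate(template, {
  PR_NUMBER: 42,
  DATABASE_URL: 'postgresql://preview:preview@db:5432/pr_42',
}));
```

The values object is assembled at deploy time from the webhook payload plus secrets pulled from your CI/CD secret store, so nothing sensitive ever lands in the repository.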
A typical preview environment consumes 256MB-1GB of RAM. Multiply by 20 open PRs, and you need 5-20GB of RAM just for previews. On a cloud VPS, that's $20-80/month of always-on compute for environments that get visited maybe twice during code review.
Two mitigation strategies work well: scale-to-zero (stop containers after an idle timeout and restart them on the next request, discussed below) and per-container resource caps (run each preview with Docker's --memory and --cpus flags so one runaway container can't starve the rest).
PRs sometimes close without triggering the webhook. The author deletes their fork, GitHub has a webhook delivery failure, or your server was down when the close event fired. Over weeks, orphaned containers accumulate.
Build a garbage collector that runs daily: query the GitHub API for all open PRs, compare against running preview containers, and stop any container whose PR no longer exists. It's a short bash script:

#!/bin/bash
# cleanup-orphans.sh -- run via cron daily
REPO="your-org/your-app"  # the repository the previews belong to
RUNNING=$(docker ps -a --filter "name=preview-pr-" --format "{{.Names}}" \
  | grep -oP '\d+')
for PR in $RUNNING; do
  STATE=$(gh pr view "$PR" --repo "$REPO" --json state --jq '.state' 2>/dev/null)
  # Only remove on a definitive answer; an API failure leaves the container alone
  if [ "$STATE" = "MERGED" ] || [ "$STATE" = "CLOSED" ]; then
    echo "Removing orphaned preview for PR #$PR"
    docker stop "preview-pr-$PR" && docker rm "preview-pr-$PR"
  fi
done
The biggest operational pain with DIY preview environments isn't any single challenge -- it's the compound maintenance burden. Database provisioning, orphan cleanup, secret management, SSL renewal, and resource monitoring each take 30 minutes to set up. But they each break independently, and debugging a failed preview deploy at 2am because the wildcard cert expired is exactly the kind of toil that makes teams abandon the system entirely.
Temps treats preview environments as a first-class feature, not a CI/CD bolt-on. Connect a GitHub repository, and every pull request automatically gets a deployed preview at a unique subdomain. The DORA 2024 report found that elite teams deploy on demand with a change failure rate below 5% -- preview environments are a key enabler of that velocity because every change is verified before merge.
When you push to a branch with an open PR, Temps builds the image, deploys it, and exposes it at a unique subdomain like pr-42.your-project.temps.run.

No GitHub Actions workflow to write. No Traefik configuration. No wildcard DNS setup. It's handled by the platform.
Preview environments are natural candidates for scale-to-zero. They're accessed briefly during review and sit idle for hours. Temps supports on-demand mode where preview containers automatically stop after a configurable idle timeout and wake on the next HTTP request.
{
"on_demand": true,
"idle_timeout_seconds": 300,
"wake_timeout_seconds": 30
}
Wake-up takes 2-5 seconds because the image is already cached. The reviewer sees a brief loading state, then the full application. For teams running 20+ concurrent PRs, on-demand mode cuts preview resource usage by 60-80%.
When a PR is merged or closed, Temps automatically stops and removes the preview container. No orphan accumulation. No daily cron jobs. The cleanup webhook fires reliably because it's part of the same system that created the preview -- there's no separate CI/CD pipeline that can fail independently.
The PR comment includes the preview URL, the short SHA of the deployed commit, and the current deploy status.
The comment updates in place on subsequent pushes rather than creating a new comment per commit. Reviewers always see the current state without scrolling through comment history.
Running preview environments on your own server costs $5-20/month depending on how many concurrent PRs your team maintains. A 4GB VPS ($6-12/month on providers like Hetzner) comfortably handles 10-15 concurrent preview containers. Compare that to Vercel's Pro plan at $20/seat/month -- a team of five pays $100/month before hitting build minute limits. Scale-to-zero reduces costs further by stopping idle containers automatically.
Each preview should have isolated data to prevent test pollution. The most practical approach is snapshot-based seeding: restore a small pg_dump for each preview that contains representative test data. This gives full isolation without running separate database servers. Temps lets you configure per-environment database URLs so each preview connects to its own schema or database instance.
Preview environments work well with monorepos. The key is detecting which services changed in the PR and only building those. GitHub Actions' paths filter handles this natively. For self-hosted setups, compare the changed file paths against your service directories and trigger builds selectively. Temps supports monorepo deployments with automatic service detection based on Dockerfile location.
The easiest way is to use a platform that handles it natively with zero configuration. With Temps, you connect your GitHub repo and every pull request automatically gets a live preview environment at a unique URL -- no GitHub Actions workflow, no Traefik setup, no wildcard DNS configuration. Just push a branch, open a PR, and the preview URL appears as a comment within minutes. Temps auto-spins preview environments per pull request, updates them on every push, and cleans them up on merge. For teams that want full control, you can also build the system yourself using webhooks, Docker, and a reverse proxy as described above -- but expect 2-4 hours of initial setup plus ongoing maintenance.
OAuth providers require registered redirect URLs, which won't match dynamic preview subdomains. Three workarounds: use a wildcard redirect URL if your OAuth provider supports it (Google does with verified domains), use a shared auth proxy that handles OAuth and forwards the session cookie, or bypass OAuth entirely in preview environments using magic link or test credentials. Never disable authentication completely -- preview URLs are publicly accessible.
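The test-credentials workaround can be sketched as a small Express-style middleware (entirely illustrative; adapt it to your auth stack, and make sure it only activates when NODE_ENV is 'preview', matching the env var set in the deploy script):

```javascript
// Preview-only auth bypass: accept a shared header token instead of OAuth.
// Outside preview environments this middleware is a no-op and the normal
// auth flow runs. Never ship the bypass enabled in production.
function previewAuthBypass(expectedToken) {
  return (req, res, next) => {
    if (process.env.NODE_ENV !== 'preview') return next();
    if (req.headers['x-preview-token'] === expectedToken) {
      req.user = { id: 'preview-tester', role: 'reviewer' }; // stub identity
      return next();
    }
    res.status(401).send('preview token required');
  };
}
```

The token itself lives in your CI/CD secret store like any other preview secret, so the preview URL stays shareable without being wide open.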
Preview environments remove the biggest bottleneck in code review: the setup friction. When every pull request comes with a live, clickable URL, reviews happen faster, bugs get caught earlier, and stakeholders stay in the loop without scheduling demo calls.
You can build the system yourself. A webhook handler, Docker, Traefik, and wildcard DNS give you the core functionality in an afternoon. The ongoing maintenance -- orphan cleanup, cert renewal, database provisioning, resource monitoring -- is where the real time cost lives.
If you'd rather skip the plumbing and get preview environments working in five minutes:
curl -fsSL temps.sh/install.sh | bash
Connect your GitHub repo, push a branch, and watch the preview URL appear in your PR. Every push updates it. Every merge cleans it up. No Actions workflow to maintain, no Traefik to configure, no wildcard certs to renew.