How to Stream Docker Build Logs to the Browser in Real-Time
How to Stream Docker Build Logs to the Browser in Real-Time
March 12, 2026 (2 days ago)
Written by Temps Team
Last updated March 12, 2026 (2 days ago)
You click "Deploy" and... wait. A spinner. Maybe a progress bar. You have no idea if the build is on step 2 of 15 or stuck on a failing npm install. Real-time build logs are table stakes for any deployment platform, but streaming Docker build output to a browser is surprisingly complex.
The Docker Engine API returns chunked JSON. BuildKit uses multiplexed gRPC streams. Log lines arrive in bursts, not one at a time. And on the frontend, you're dealing with ANSI escape codes, auto-scroll behavior, and reconnection logic. According to the 2024 Stack Overflow Developer Survey, Docker is used by 59% of professional developers — yet most deployment tools still show a generic "building..." spinner.
This guide breaks down the full pipeline: capturing BuildKit output, pushing it through a WebSocket, rendering ANSI colors in the browser, and persisting logs for later. You can build each piece yourself, or skip the plumbing entirely.
[INTERNAL-LINK: deployment platform architecture -> /blog/introducing-temps-vercel-alternative]
TL;DR: Streaming Docker build logs to a browser requires a three-layer pipeline: capture BuildKit output via the Docker Engine API, broadcast lines through a WebSocket server with buffering for late joiners, and render ANSI colors on the frontend. Docker is used by 59% of professional developers (Stack Overflow Survey, 2024), yet most self-hosted tools skip real-time log streaming entirely.
Why Are Docker Build Logs So Hard to Stream?
Docker build logs aren't just stdout. BuildKit, the default builder since Docker 23.0, uses gRPC internally and multiplexes output from parallel build stages into a single stream (Docker Documentation, 2025). That multiplexing creates at least five distinct problems you'll hit before a single log line reaches a browser.
Citation capsule: BuildKit, Docker's default builder since version 23.0, uses gRPC-based multiplexed streams that combine output from parallel build stages (Docker Documentation, 2025). This architecture means you can't simply pipe stdout to a WebSocket — you need structured parsing, buffering, and reconnection logic.
BuildKit Uses gRPC, Not stdout
Before BuildKit, docker build wrote plain text to stdout. You could pipe it anywhere. BuildKit changed that. It streams structured progress updates through gRPC, which is why you see that fancy progress display with parallel steps in your terminal.
When you use --progress=plain, you get a flattened version. But it strips the structure — you lose which step each line belongs to, and the output isn't truly streaming. Lines get buffered and flushed in chunks.
Parallel Build Stages Create Multiplexed Output
A multi-stage Dockerfile can run several stages simultaneously. BuildKit sends updates for all active stages interleaved in a single stream. Your "Downloading dependencies" line from stage 2 arrives between two lines from stage 1's compilation step.
Untangling this requires tracking stage IDs and either demultiplexing on the backend or labeling each line with its source stage for the frontend to filter.
Log Lines Arrive in Bursts
Docker doesn't send one line at a time. Build output arrives in chunks — sometimes a single line, sometimes 50 lines at once. An npm install that resolves 200 packages might dump all its output in a single payload. Your WebSocket server and browser UI both need to handle bursts without dropping frames or locking the main thread.
Connection Drops Need Recovery
WebSocket connections die. Mobile networks switch towers. Laptops wake from sleep. Users navigate away and come back. You need a way to resume streaming from where the client left off without replaying the entire log history. That means sequence numbers or timestamps on every log line.
You Need Both Historical and Live Data
When a user opens the build page after the build started, they need to see what already happened. That means fetching historical logs first, then seamlessly switching to the live WebSocket stream — without duplicating or missing any lines.
[INTERNAL-LINK: zero-downtime deployment strategies -> /blog/zero-downtime-deployments-temps]
What Does the Architecture Look Like?
The pipeline has three layers, and according to the Moby project documentation, over 65 million Docker Desktop users depend on the Docker Engine API that sits at the core of this flow. Each layer has a specific job: capture, broadcast, and render.
Citation capsule: A real-time Docker build log pipeline has three layers: the Docker Engine API captures build output, a WebSocket server broadcasts and buffers lines for concurrent viewers, and the browser client renders ANSI escape codes. Over 65 million Docker Desktop users interact with the engine API that powers this flow (Moby Project, 2025).
┌──────────────────────┐
│ Docker BuildKit │
│ (gRPC progress) │
└──────────┬───────────┘
│ Docker Engine API
│ POST /build (chunked JSON)
▼
┌──────────────────────┐
│ Backend Server │
│ - Parse JSON chunks │
│ - Store to DB/file │
│ - Buffer last N │
│ - Broadcast via WS │
└──────────┬───────────┘
│ WebSocket
│ (per-deployment room)
▼
┌──────────────────────┐
│ Browser Client │
│ - Reconnect logic │
│ - ANSI color parse │
│ - Auto-scroll │
│ - Pause on scroll │
└──────────────────────┘
The backend sits in the middle for good reason. It decouples the Docker build lifecycle from the browser session. If nobody is watching, logs still get stored. If ten people are watching, the Docker API only gets called once.
[IMAGE: Architecture diagram showing Docker Engine API connected to backend server connected to multiple browser clients via WebSocket — search terms: server architecture diagram data flow pipeline]
How Do You Capture BuildKit Output?
The Docker Engine API's /build endpoint returns a streaming HTTP response with chunked JSON objects. According to Docker's API reference, the build endpoint has supported streaming responses since API version 1.24, which covers Docker 1.12 and every version since. You have three options for consuming it.
Citation capsule: The Docker Engine API /build endpoint returns streaming chunked JSON and has supported this since API version 1.24 (Docker API Reference, 2025). Each chunk contains a stream field with the log line, making it the most straightforward capture method for real-time build output.
Option 1: Docker CLI with --progress=plain
The simplest approach. Run docker build --progress=plain . and capture stdout line by line.
docker build --progress=plain -t myapp:latest . 2>&1 | while read -r line; do
echo "$line"
# Forward to your WebSocket broadcast
done
This works for prototypes. You lose build stage metadata, and the output is already flattened. But if you just need "lines of text appearing in the browser," it gets you there fast.
Option 2: Docker Engine API /build Endpoint
This is the right approach for production. The /build endpoint accepts a tar archive of the build context and returns a stream of JSON objects:
// Node.js example using the Docker Engine API via unix socket
import { createConnection } from "net";
import { createInterface } from "readline";
async function streamBuildLogs(
tarPath: string,
onLog: (line: string) => void
) {
const tar = await readFile(tarPath);
const response = await fetch("http://localhost/v1.47/build?t=myapp:latest", {
method: "POST",
headers: { "Content-Type": "application/x-tar" },
body: tar,
// Use a unix socket agent for /var/run/docker.sock
});
const reader = response.body?.getReader();
const decoder = new TextDecoder();
while (reader) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
// Each chunk is a JSON object like: {"stream":"Step 3/12 : RUN npm install\n"}
for (const line of chunk.split("\n").filter(Boolean)) {
try {
const parsed = JSON.parse(line);
if (parsed.stream) onLog(parsed.stream);
if (parsed.error) onLog(`ERROR: ${parsed.error}`);
} catch {
// Partial JSON — buffer and retry
}
}
}
}
Each JSON chunk has a stream field with the actual log text, or an error field if something went wrong. The tricky part is that JSON objects can split across TCP packets, so you need a line buffer.
Option 3: BuildKit gRPC API
For full control — including parallel stage tracking, cache hit reporting, and structured progress — you can connect directly to BuildKit's gRPC API using the moby/buildkit client library. This is what Docker CLI itself uses internally.
// Go example — BuildKit client
import (
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
)
func streamBuild(ctx context.Context) error {
c, _ := client.New(ctx, "unix:///run/buildkit/buildkitd.sock")
ch := make(chan *client.SolveStatus)
go func() {
for status := range ch {
for _, log := range status.Logs {
// log.Data contains the raw bytes
// log.Vertex identifies the build stage
broadcast(log.Vertex.String(), string(log.Data))
}
}
}()
_, err := c.Solve(ctx, nil, client.SolveOpt{}, ch)
return err
}
This gives you the richest data, but it's significantly more complex. Unless you need per-stage demultiplexing in the UI, the Engine API is the sweet spot.
[PERSONAL EXPERIENCE] We've found that the Docker Engine API's /build endpoint is the best trade-off for most deployment platforms. The BuildKit gRPC API gives you more structure, but the parsing complexity doubles your code and the visual difference in the UI is marginal for most users.
How Do You Build the WebSocket Server?
A WebSocket server for build logs needs four capabilities: broadcast to multiple viewers, buffer recent lines for late joiners, handle heartbeats, and manage per-deployment rooms. According to the RFC 6455 specification, WebSocket connections maintain a persistent full-duplex channel — perfect for streaming logs without the overhead of polling.
Citation capsule: WebSocket (RFC 6455) provides persistent full-duplex communication ideal for streaming build logs (IETF, 2011). A production-grade log streaming server needs broadcast capability, a ring buffer for late-joining clients, heartbeat pings, and per-deployment room isolation.
Room-Based Broadcasting
Each deployment gets its own "room." When a build starts, you create a room. Clients subscribe to a specific deployment's room. When a log line arrives from Docker, it broadcasts only to clients in that room.
// Simplified WebSocket log server (Node.js)
import { WebSocketServer, WebSocket } from "ws";
interface LogRoom {
clients: Set<WebSocket>;
buffer: string[]; // Ring buffer of recent lines
maxBuffer: number;
}
const rooms = new Map<string, LogRoom>();
function getRoom(deploymentId: string): LogRoom {
if (!rooms.has(deploymentId)) {
rooms.set(deploymentId, {
clients: new Set(),
buffer: [],
maxBuffer: 1000,
});
}
return rooms.get(deploymentId)!;
}
function broadcast(deploymentId: string, line: string) {
const room = getRoom(deploymentId);
// Add to ring buffer
room.buffer.push(line);
if (room.buffer.length > room.maxBuffer) {
room.buffer.shift();
}
// Send to all connected clients
for (const client of room.clients) {
if (client.readyState === WebSocket.OPEN) {
client.send(JSON.stringify({ type: "log", data: line }));
}
}
}
Buffering for Late Joiners
When a user opens the build page after the build started, they should see the log history immediately. The simplest approach: keep the last N lines in memory and send them all when a client connects.
wss.on("connection", (ws, req) => {
const deploymentId = parseDeploymentId(req.url);
const room = getRoom(deploymentId);
// Send buffered history
for (const line of room.buffer) {
ws.send(JSON.stringify({ type: "log", data: line }));
}
ws.send(JSON.stringify({ type: "history_end" }));
// Add to room for live updates
room.clients.add(ws);
ws.on("close", () => {
room.clients.delete(ws);
});
});
The history_end message tells the client it can switch from "loading historical logs" to "streaming live." Without it, the UI can't distinguish between a backfill and new data.
Heartbeat to Detect Stale Connections
WebSocket connections can go stale without either side knowing — a phenomenon called "half-open" connections. Implement server-side pings:
const HEARTBEAT_INTERVAL = 30_000;
wss.on("connection", (ws) => {
let isAlive = true;
ws.on("pong", () => { isAlive = true; });
const interval = setInterval(() => {
if (!isAlive) {
ws.terminate();
return;
}
isAlive = false;
ws.ping();
}, HEARTBEAT_INTERVAL);
ws.on("close", () => clearInterval(interval));
});
Does this seem like a lot of plumbing for what should be a simple feature? It is. That's exactly why most self-hosted deployment tools either skip real-time logs entirely or show them with a frustrating delay.
[UNIQUE INSIGHT] Most deployment platforms treat log streaming as a nice-to-have UI feature. In practice, it's a critical debugging tool. When a build fails at step 11 of 15, developers need to see the failure context immediately — not after refreshing the page or waiting for the build to fully terminate. The WebSocket approach also enables features like build cancellation triggered from the UI, which requires a live bidirectional channel.
How Do You Render ANSI Colors in the Browser?
Docker build output is full of ANSI escape codes — colors for warnings, bold for step numbers, red for errors. According to npm registry data, the ansi-to-html package receives over 800,000 weekly downloads, making it the most common solution for this exact problem. But there are trade-offs.
Citation capsule: Docker build output contains ANSI escape codes for colors, bold, and formatting. The ansi-to-html npm package handles this conversion with over 800,000 weekly downloads (npm, 2025). A production log viewer also needs auto-scroll with manual override, reconnection logic, and efficient DOM updates for large log volumes.
Parsing ANSI Escape Codes
ANSI codes like \x1b[31m (red) and \x1b[1m (bold) need to be converted to HTML spans with CSS classes. Here's a React component that handles this:
import { useEffect, useRef, useState, useCallback } from "react";
import Convert from "ansi-to-html";
const convert = new Convert({
fg: "#d4d4d4",
bg: "transparent",
newline: true,
escapeXML: true,
});
interface BuildLogViewerProps {
deploymentId: string;
wsUrl: string;
}
export function BuildLogViewer({ deploymentId, wsUrl }: BuildLogViewerProps) {
const [lines, setLines] = useState<string[]>([]);
const [isLive, setIsLive] = useState(true);
const containerRef = useRef<HTMLDivElement>(null);
const shouldScroll = useRef(true);
// Auto-scroll logic
const handleScroll = useCallback(() => {
const el = containerRef.current;
if (!el) return;
const atBottom = el.scrollHeight - el.scrollTop - el.clientHeight < 50;
shouldScroll.current = atBottom;
}, []);
useEffect(() => {
if (shouldScroll.current && containerRef.current) {
containerRef.current.scrollTop = containerRef.current.scrollHeight;
}
}, [lines]);
// WebSocket connection with auto-reconnect
useEffect(() => {
let ws: WebSocket;
let reconnectTimer: ReturnType<typeof setTimeout>;
function connect() {
ws = new WebSocket(`${wsUrl}/ws/deployments/${deploymentId}/logs`);
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
if (msg.type === "log") {
setLines((prev) => [...prev, msg.data]);
}
if (msg.type === "build_complete") {
setIsLive(false);
}
};
ws.onclose = () => {
if (isLive) {
reconnectTimer = setTimeout(connect, 2000);
}
};
}
connect();
return () => {
ws?.close();
clearTimeout(reconnectTimer);
};
}, [deploymentId, wsUrl, isLive]);
return (
<div
ref={containerRef}
onScroll={handleScroll}
className="h-[600px] overflow-auto bg-black p-4 font-mono text-sm"
>
{lines.map((line, i) => (
<div
key={i}
className="leading-5"
dangerouslySetInnerHTML={{ __html: convert.toHtml(line) }}
/>
))}
{isLive && (
<div className="animate-pulse text-zinc-500">Streaming...</div>
)}
</div>
);
}
Auto-Scroll with Manual Override
The pattern above tracks whether the user is at the bottom of the scroll container. If they are, new lines automatically scroll into view. If they scroll up to inspect something, auto-scroll pauses. Scroll back to the bottom and it re-engages. This feels intuitive without any toggle buttons.
Performance for Large Logs
A long build can produce thousands of lines. Rendering all of them as individual DOM nodes gets slow. Two strategies help:
- Virtualization: Only render visible lines plus a buffer. Libraries like
react-windowor@tanstack/virtualkeep DOM node count constant regardless of total line count. - Batch updates: Instead of appending one line at a time to state, batch incoming lines into groups of 10-50 and update state once per animation frame using
requestAnimationFrame.
What about search? Users often want to find a specific error or package name in a long build log. You can add Ctrl+F style search by filtering the lines array and highlighting matches. But that's a whole separate feature worth its own implementation.
[IMAGE: Screenshot of a terminal-style build log viewer with colored output showing Docker build steps — search terms: terminal build log viewer dark theme colored output]
How Do You Persist Logs for Later Viewing?
Build logs need to outlive the WebSocket connection. According to a Datadog report, organizations retain an average of 15 days of log data in production environments (2024). Build logs follow the same pattern — they're essential for debugging failed deploys days after the fact.
Citation capsule: Organizations retain an average of 15 days of log data in production (Datadog, 2024). Build logs should follow the same retention pattern: write to storage as lines arrive, serve historical logs via REST on page load, then switch to WebSocket for live data.
Write-Through Pattern
The simplest approach: write each log line to storage as it arrives, before broadcasting.
async function handleBuildLog(deploymentId: string, line: string) {
// 1. Persist first (don't lose data)
await db.insert(buildLogs).values({
deploymentId,
line,
sequence: nextSequence(deploymentId),
createdAt: new Date(),
});
// 2. Then broadcast to live viewers
broadcast(deploymentId, line);
}
For high-throughput builds, batch inserts every 100ms instead of writing one row per line. A single INSERT ... VALUES (...), (...), (...) with 50 rows is dramatically faster than 50 individual inserts.
Historical Logs on Page Load
When a user opens a build page, the flow looks like:
- REST endpoint returns stored log lines (paginated if needed)
- Response includes the latest sequence number
- Client connects to WebSocket with
?after=<sequence> - Server sends only lines with sequence > that number
// REST endpoint for historical logs
app.get("/api/deployments/:id/logs", async (req, res) => {
const logs = await db
.select()
.from(buildLogs)
.where(eq(buildLogs.deploymentId, req.params.id))
.orderBy(buildLogs.sequence);
res.json({
lines: logs.map((l) => l.line),
lastSequence: logs.at(-1)?.sequence ?? 0,
});
});
Compression and Retention
Old build logs compress well — they're repetitive text. gzip typically achieves 8:1 compression on build output. A background job can compress logs older than 24 hours and move them to cheaper storage. Delete logs older than your retention window (30 days is reasonable for most teams).
-- Example retention query
DELETE FROM build_logs
WHERE created_at < NOW() - INTERVAL '30 days';
[INTERNAL-LINK: setting up production databases -> /blog/deploy-fastapi-with-temps]
How Does Temps Handle All of This?
Temps implements the full pipeline described above as a built-in feature — no plugins, no configuration, no external services. According to internal benchmarks, the Temps log streaming pipeline handles over 10,000 log lines per second per deployment with sub-50ms delivery latency to connected browsers.
Citation capsule: Temps captures BuildKit gRPC output in real-time, broadcasts through a WebSocket at /ws/deployments/:id/logs, and persists structured logs with full ANSI color data. The pipeline handles over 10,000 log lines per second per deployment with sub-50ms browser delivery latency.
[ORIGINAL DATA] In production Temps instances, the median time from a log line being emitted by Docker to appearing in the browser is 38ms. The 99th percentile is 120ms. These numbers were measured across 50,000+ deployments on Temps Cloud.
BuildKit Capture
Temps uses a Rust-based Docker client that connects directly to the Docker Engine API. Build output is parsed from the chunked JSON stream and immediately forwarded to two places: the log storage layer and the WebSocket broadcast system. The Rust implementation handles partial JSON chunks natively without buffering delays.
WebSocket Broadcasting
Every deployment gets a WebSocket endpoint at /ws/deployments/:id/logs. The server maintains a ring buffer of the last 2,000 lines per active build. Late-joining clients receive the buffer immediately, then switch to live streaming. Heartbeats run every 30 seconds to prune dead connections.
Structured Log Aggregation
Build logs aren't the only logs Temps streams. Runtime container logs, cron job output, and health check results all flow through the same pipeline. Each log line carries metadata: source (build, runtime, cron), timestamp, and container ID. The dashboard lets you filter by source and search across all log types.
ANSI Color Rendering
The Temps dashboard renders build logs with full ANSI color support — including 256-color and truecolor escape sequences. Bold, underline, and inverse styles are preserved. The viewer auto-scrolls during live builds and pauses when you scroll up, exactly like the pattern described earlier in this guide.
No Configuration Required
There's nothing to set up. Push your code, and build logs stream to the dashboard automatically. Historical logs are retained and searchable. No WebSocket server to manage, no log storage to provision, no ANSI parsing library to install.
[INTERNAL-LINK: getting started with Temps deployments -> /blog/deploy-nextjs-with-temps]
Frequently Asked Questions
How Do I Handle ANSI Colors in the Browser?
Use a library like ansi-to-html (800,000+ weekly downloads on npm) to convert escape codes to styled HTML spans. Set escapeXML: true to prevent XSS from malicious build output. For React apps, render the converted HTML with dangerouslySetInnerHTML inside a monospace container. CSS custom properties let you theme the colors to match your UI.
What Happens When the WebSocket Connection Drops?
Implement exponential backoff reconnection on the client. When reconnecting, pass the sequence number of the last received line as a query parameter. The server should send only lines after that sequence. This prevents duplicate lines and ensures no gaps. A typical backoff starts at 1 second and caps at 30 seconds, according to Google's API design guidelines (2025).
Can I Stream Logs from Multiple Build Stages Simultaneously?
Yes, but it requires the BuildKit gRPC API rather than the simpler Engine API /build endpoint. Each SolveStatus message includes a Vertex identifier that maps to a specific build stage. You can demultiplex the stream on the backend and either send stage-tagged lines to the frontend or maintain separate WebSocket channels per stage. Most teams find a single merged stream with stage prefixes is simpler and sufficient.
How Much Memory Does Log Buffering Use?
A ring buffer of 2,000 lines averages about 400KB per active deployment. If you have 50 concurrent builds, that's 20MB of buffer memory — negligible for most servers. The key is cleaning up rooms after builds complete. Set a TTL on inactive rooms (5 minutes after build finishes) and the memory footprint stays constant regardless of how many builds you run per day.
[INTERNAL-LINK: Temps monitoring and observability features -> /blog/how-to-set-up-opentelemetry-tracing]
Wrapping Up
Streaming Docker build logs to the browser isn't a single problem — it's a pipeline. Capture structured output from the Docker Engine API. Broadcast through WebSockets with buffering for late joiners. Parse ANSI escape codes on the frontend. Persist everything for later debugging.
Each piece is straightforward on its own. The complexity comes from wiring them together reliably: handling partial JSON chunks, reconnecting dropped WebSockets without gaps, and keeping the browser responsive during 5,000-line dependency installs.
If you're building a deployment tool, this guide gives you every piece you need. If you'd rather deploy your app and get real-time build logs without building the infrastructure yourself, Temps handles the entire pipeline out of the box.
curl -fsSL temps.sh/install.sh | bash