March 12, 2026
Written by Temps Team
Last updated March 12, 2026
Your reverse proxy works perfectly — until a monitoring tool sends a HEAD request over HTTP/2 and hangs forever.
The bug is subtle. Your upstream returns a content-length: 45832 header on a HEAD response. Over HTTP/1.1, the client reads the headers, sees there's no body, and moves on. Over HTTP/2, the client sees content-length: 45832 and waits for 45,832 bytes of body data that will never arrive. The connection stalls. Your uptime monitor reports a timeout. Your CDN prefetch fails silently.
According to Cloudflare's Radar, over 60% of web traffic now uses HTTP/2 or HTTP/3. That means this bug affects the majority of connections your proxy handles. And it's not a new edge case — it's been filed as an issue against nearly every major reverse proxy project.
This post explains exactly what causes the hang, which proxies are affected, and how to fix it in three lines of config.
TL;DR: HEAD responses with a `content-length` header cause HTTP/2 clients to hang waiting for body data that never arrives. Over 60% of web traffic uses HTTP/2+. The fix: strip `content-length` from HEAD responses when proxying over HTTP/2. Most custom proxies get this wrong by default.
HEAD requests account for roughly 2-5% of all HTTP traffic according to HTTP Archive, but they're critical infrastructure glue. RFC 9110 defines HEAD as identical to GET — same headers, same status code — except the server MUST NOT send a body.
HEAD shows up in more places than most developers realize:
- Uptime monitors checking endpoint health without downloading response bodies.
- CDN prefetchers validating resources before deciding what to cache.
- Cache revalidation: checking `Last-Modified` or `ETag` headers before deciding whether to re-fetch content.

The spec is clear but leaves a trap. RFC 9110 Section 9.3.2 states:
"The server SHOULD send the same header fields in response to a HEAD request as it would have sent if the request had been a GET, except that the payload header fields MAY be omitted."
That word "MAY" is doing heavy lifting. The server can include content-length in a HEAD response — it represents the size the body would have been. In HTTP/1.1, this is useful information. In HTTP/2, it's a trap.
HTTP/2's multiplexing model fundamentally changes how clients interpret content-length. According to W3Techs, 36% of all websites use HTTP/2 as their primary protocol. Every one of them is potentially affected by this framing mismatch.
In HTTP/1.1, the client knows a HEAD response is complete through several mechanisms:
- It sent the HEAD request itself, so it knows not to expect a body.
- Responses arrive strictly in order, one per request, so the end of the header block is the end of the response.
- A connection close unambiguously ends the exchange.

Even if `content-length: 45832` appears in the response, a well-behaved HTTP/1.1 client ignores it for HEAD. The protocol's sequential nature makes this straightforward. Read headers, done.
HTTP/2 multiplexes multiple requests over a single TCP connection. There's no "connection close" per request. Instead, each request-response pair lives in its own stream, and the stream ends when an END_STREAM flag appears on a DATA or HEADERS frame.
Here's where it breaks down:
1. The client sends a HEAD request over HTTP/2.
2. The upstream (often HTTP/1.1) responds with headers including `content-length: 45832`.
3. The proxy forwards the response headers unchanged, including `content-length: 45832`.
4. The client sees `content-length: 45832` and expects a DATA frame with 45,832 bytes.
5. The proxy never sends a DATA frame and never sets END_STREAM on the HEADERS frame.

The root cause: some HTTP/2 implementations treat content-length as a contract. If you promise 45,832 bytes and deliver zero, that's a protocol error — or at minimum, an ambiguous state that different clients handle differently.
How badly this fails depends on the client:
| Client | Behavior with `content-length` on HEAD over HTTP/2 |
|---|---|
| curl (nghttp2) | Handles correctly — ignores body expectation for HEAD |
| Go net/http | May hang depending on version and keep-alive settings |
| Python httpx | Respects HEAD semantics — no hang |
| Node.js http2 | Can hang if stream END_STREAM flag is missing |
| Java HttpClient | Varies by implementation — some wait for body |
The inconsistency is the real problem. Your proxy might work fine with one client and break with another. But why leave it to chance?
A 2024 Netcraft survey found that Nginx powers roughly 34% of all active websites. Despite its dominance, even Nginx didn't handle this correctly until specific configuration options were added. The landscape is a patchwork of defaults, and most custom proxies get it wrong.
| Proxy | Default behavior for HEAD + HTTP/2 | Safe by default? |
|---|---|---|
| Nginx | Strips content-length from HEAD over HTTP/2 | Yes |
| HAProxy | Passes content-length through unchanged | No |
| Envoy | Configurable via http2_protocol_options | Depends |
| Caddy | Strips content-length from HEAD responses | Yes |
| Traefik | Passes through by default | No |
| Pingora | Passes through unless explicitly handled | No |
| Custom (hyper, h2) | Almost always passes through | No |
If you're building a proxy with hyper, h2, or any low-level HTTP/2 library, you're responsible for handling this yourself. The library gives you raw frames and headers — it doesn't strip content-length for you. That's by design. Libraries stay close to the wire format and leave policy decisions to you.
But "leave it to the developer" means most developers never think about it until a monitoring tool starts reporting phantom timeouts.
The fix is straightforward: if the request method is HEAD and the downstream connection is HTTP/2, strip the content-length header from the response. Keep everything else — content-type, etag, last-modified, cache-control. Only content-length causes the hang.
Nginx handles this correctly by default for HTTP/2 frontends. But if you're proxying HEAD requests through a chain of Nginx instances (e.g., edge -> origin), verify the upstream leg too:
```nginx
server {
    listen 443 ssl;
    http2 on;

    location / {
        proxy_pass http://upstream;
        proxy_http_version 1.1;  # upstream stays HTTP/1.1
        # Nginx automatically strips content-length
        # from HEAD responses on HTTP/2 frontend
    }
}
```
If you need explicit control:
```nginx
# Force strip content-length for HEAD responses
if ($request_method = HEAD) {
    more_set_headers -s "200 204 301 302" "Content-Length:";
}
```
Note: the more_set_headers directive requires the headers-more-nginx-module. Setting the header to an empty value effectively removes it.
For a Node.js HTTP/2 proxy — common in serverless frameworks and custom gateways:
```javascript
import http2 from 'node:http2';

function proxyResponse(clientStream, upstreamHeaders, method) {
  const headers = { ...upstreamHeaders };

  // Strip content-length from HEAD responses over HTTP/2
  if (method === 'HEAD' && headers['content-length']) {
    delete headers['content-length'];
  }

  clientStream.respond(headers, { endStream: method === 'HEAD' });
}
```
The critical detail: set endStream: true when responding to HEAD. This sends the END_STREAM flag on the HEADERS frame, telling the client no DATA frames will follow.
For a Rust proxy using hyper — the same library Pingora and many production proxies build on:
```rust
use hyper::{Request, Response, Method, body::Bytes};
use http::header::CONTENT_LENGTH;

fn fix_head_response(
    req: &Request<()>,
    mut resp: Response<Bytes>,
) -> Response<Bytes> {
    if req.method() == Method::HEAD {
        resp.headers_mut().remove(CONTENT_LENGTH);
        // Replace body with empty to signal END_STREAM
        *resp.body_mut() = Bytes::new();
    }
    resp
}
```
In hyper, setting an empty body ensures the response ends cleanly. The HTTP/2 codec sends END_STREAM on the HEADERS frame when there's no body, which is exactly what the client expects for HEAD.
We hit this exact issue while building the proxy layer in Temps. Our uptime monitoring — which sends HEAD requests every 30 seconds to check app health — started reporting intermittent timeouts. But only for apps served over HTTP/2.
The symptoms were misleading. Here's how it played out:
1. Intermittent timeouts showed up in our uptime checks, with no obvious pattern across apps.
2. We inspected the wire traffic with nghttp. The HEADERS frame includes `content-length: 12847` but no END_STREAM flag, and no DATA frame follows. The client sits there waiting.
3. We patched the proxy to strip `content-length` from HEAD responses over HTTP/2. Timeouts dropped to zero immediately.

The frustrating part? This only affected HTTP/2 connections. HTTP/1.1 clients handled it fine. So the bug appeared intermittent — depending on which protocol the monitoring tool negotiated.
Most developers test with curl, which uses HTTP/1.1 by default. You have to explicitly pass --http2 to trigger the bug. Browser DevTools don't show HEAD requests in normal browsing. And the timeout looks like a network issue, not a proxy bug.
According to the HTTP/2 FAQ maintained by the IETF HTTP Working Group, HTTP/2 implementations should be tested with dedicated tools — not just browsers. A curl -I over HTTP/2 is the fastest way to check, but frame-level inspection reveals the full picture.
```bash
# Send HEAD over HTTP/2 and check for content-length
curl -I --http2 -s -o /dev/null -w "%{http_code}" https://your-app.com/

# Verbose output shows headers — look for content-length
curl -I --http2 -v https://your-app.com/ 2>&1 | grep -i content-length
```
If you see content-length in the response, your proxy is passing it through. That doesn't guarantee a hang — it depends on the client — but it's a latent bug waiting to trigger.
nghttp is the gold standard for HTTP/2 debugging. It shows you the actual frames on the wire:
```bash
# Install nghttp2
# macOS: brew install nghttp2
# Ubuntu: apt install nghttp2-client

# Send HEAD and inspect frames
nghttp -vn --no-dep https://your-app.com/ -H ':method: HEAD'
```
Look for these in the output:
- A HEADERS frame with the END_STREAM flag — this means the proxy correctly signaled no body.
- A `content-length` header in the HEADERS frame — if present without END_STREAM, the client may hang.
- A DATA frame after a HEAD — this is a protocol violation.

Add method-specific timeout tracking to your proxy logs:
```bash
# Find HEAD requests with abnormal response times
grep "HEAD" /var/log/nginx/access.log | awk '$NF > 5.0 {print}'
```
Any HEAD request taking more than a few hundred milliseconds is suspicious. HEAD responses should be faster than GET since there's no body to transmit.
HTTP/1.1 clients are not affected. They handle `content-length` on HEAD responses correctly because the protocol's sequential, connection-oriented nature makes it unambiguous: the client knows it sent HEAD, so it doesn't wait for body data regardless of what `content-length` says. Per RFC 9110, clients on both protocols should handle this — but HTTP/2's multiplexed streams make the edge case far more likely to cause hangs.
Strip the header only when the downstream connection is HTTP/2 or HTTP/3. For HTTP/1.1, keeping `content-length` on HEAD responses is actually useful — it tells the client the resource size without downloading it. Over 60% of traffic uses HTTP/2+, so in practice you'll strip it more often than not. But don't blanket-remove it for all protocols.
Major CDNs like Cloudflare, Fastly, and AWS CloudFront strip content-length from HEAD responses on their HTTP/2 edge. They've dealt with this issue at scale. If you're behind a CDN, you're likely protected. But if your origin also serves HTTP/2 directly — for API endpoints, internal services, or bypass routes — you still need the fix at the origin proxy layer.
The failure mode is timeouts, not data corruption. The worst case is a stalled connection that eventually times out; no actual data is misdelivered. But the timeouts can cascade: a monitoring tool that hangs on HEAD may mark your service as down, triggering false alerts and potentially automated failover actions.
HTTP/3 is affected too. It inherits HTTP/2's multiplexing model over QUIC streams, so the same `content-length` ambiguity applies. If you're stripping the header for HTTP/2, apply the same logic for HTTP/3. The IETF's RFC 9114 (HTTP/3) follows the same framing semantics as HTTP/2 for header-only responses.
HEAD requests are deceptively simple. The method has existed since HTTP/1.0, and most developers assume their proxy handles it correctly. But HTTP/2's multiplexed streams turn a harmless content-length header into a connection-stalling bug that's hard to diagnose and easy to miss.
The fix takes three lines of code: check the method, check the protocol, strip the header. Test it with curl -I --http2 and verify with nghttp frame inspection.
If you're running a reverse proxy in production — especially a custom one built on hyper, h2, or similar libraries — add this check today. Your monitoring tools, CDN prefetchers, and API clients will thank you.
[IMAGE: HTTP/2 HEAD request flow diagram showing HEADERS frame with and without content-length — search terms: http2 request response flow diagram network protocol]
[CHART: Bar chart — HEAD request timeout rates before and after content-length stripping — source: internal Temps monitoring data]