How to Proxy HEAD Requests Correctly Over HTTP/2
March 12, 2026
Written by Temps Team
Last updated March 12, 2026
Your reverse proxy works perfectly — until a monitoring tool sends a HEAD request over HTTP/2 and hangs forever.
The bug is subtle. Your upstream returns a content-length: 45832 header on a HEAD response. Over HTTP/1.1, the client reads the headers, sees there's no body, and moves on. Over HTTP/2, the client sees content-length: 45832 and waits for 45,832 bytes of body data that will never arrive. The connection stalls. Your uptime monitor reports a timeout. Your CDN prefetch fails silently.
According to Cloudflare's Radar (2025), over 60% of web traffic now uses HTTP/2 or HTTP/3. That means this bug affects the majority of connections your proxy handles. And it's not a new edge case — it's been filed as an issue against nearly every major reverse proxy project.
This post explains exactly what causes the hang, which proxies are affected, and how to fix it in three lines of config.
[INTERNAL-LINK: reverse proxy architecture -> /blog/introducing-temps-vercel-alternative]
TL;DR: HEAD responses with a content-length header cause HTTP/2 clients to hang waiting for body data that never arrives. Over 60% of web traffic uses HTTP/2+ (Cloudflare Radar, 2025). The fix: strip content-length from HEAD responses when proxying over HTTP/2. Most custom proxies get this wrong by default.
What Is the HEAD Method For?
HEAD requests account for roughly 2-5% of all HTTP traffic according to HTTP Archive (2024), but they're critical infrastructure glue. RFC 9110 defines HEAD as identical to GET — same headers, same status code — except the server MUST NOT send a body.
Citation capsule: HEAD requests return only headers, no body, per RFC 9110 Section 9.3.2. They account for 2-5% of HTTP traffic (HTTP Archive, 2024) and are essential for monitoring, cache validation, and CDN prefetching.
Common Uses for HEAD Requests
HEAD shows up in more places than most developers realize:
- Uptime monitoring — Pingdom, UptimeRobot, and custom health checks send HEAD to verify a server is responding without downloading the full page.
- Cache validation — Browsers and CDNs send HEAD to check Last-Modified or ETag headers before deciding whether to re-fetch content.
- CDN prefetching — Edge nodes send HEAD to warm caches and determine content size before committing to a full GET.
- API clients — Some SDKs send HEAD to check authentication or resource existence before making expensive requests.
What RFC 9110 Actually Says
The spec is clear but leaves a trap. RFC 9110 Section 9.3.2 states:
"The server SHOULD send the same header fields in response to a HEAD request as it would have sent if the request had been a GET, except that the payload header fields MAY be omitted."
That word "MAY" is doing heavy lifting. The server can include content-length in a HEAD response — it represents the size the body would have been. In HTTP/1.1, this is useful information. In HTTP/2, it's a trap.
Why Does Content-Length Break HEAD Responses Over HTTP/2?
HTTP/2's multiplexing model fundamentally changes how clients interpret content-length. According to W3Techs (2025), 36% of all websites use HTTP/2 as their primary protocol. Every one of them is potentially affected by this framing mismatch.
Citation capsule: HTTP/2 uses multiplexed streams where content-length signals expected body size per-stream (RFC 9113, Section 8.1.1). When a HEAD response includes content-length, HTTP/2 clients may wait for body bytes that never arrive — causing timeouts that affect 36% of websites using HTTP/2 (W3Techs, 2025).
How HTTP/1.1 Handles HEAD
In HTTP/1.1, the client knows a HEAD response is complete through several mechanisms:
- The client sent HEAD, so it already knows no body is coming.
- Connection close signals the end of the response.
- Transfer boundaries are loose — the client simply reads headers and stops.
Even if content-length: 45832 appears in the response, a well-behaved HTTP/1.1 client ignores it for HEAD. The protocol's sequential nature makes this straightforward. Read headers, done.
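The HTTP/1.1 behavior can be sketched as a minimal response parser (a hypothetical helper for illustration, not any real client library): for HEAD, the message ends at the blank line after the headers, so the parser records content-length but never attempts a body read.

```python
def read_head_response(raw: bytes) -> dict:
    """Parse an HTTP/1.1 response to a HEAD request.

    For HEAD, the message ends at the blank line after the headers;
    content-length describes the body a GET *would* have returned,
    so we record it but never wait for body bytes.
    """
    header_block, _, _ = raw.partition(b"\r\n\r\n")
    lines = header_block.decode("ascii").split("\r\n")
    status_line, header_lines = lines[0], lines[1:]
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"status": int(status_line.split()[1]), "headers": headers}

resp = read_head_response(
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Length: 45832\r\n"
    b"Content-Type: text/html\r\n"
    b"\r\n"
)
# Parsing completes at the header boundary; no body read is attempted.
```

The key point: the parser is done as soon as the header block ends, which is exactly what a sequential HTTP/1.1 client does for HEAD.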
How HTTP/2 Changes Everything
HTTP/2 multiplexes multiple requests over a single TCP connection. There's no "connection close" per request. Instead, each request-response pair lives in its own stream, and the stream ends when an END_STREAM flag appears on a DATA or HEADERS frame.
Here's where it breaks down:
- Client sends HEAD on stream 5.
- Proxy forwards to upstream, gets response with content-length: 45832.
- Proxy sends HEADERS frame to client with content-length: 45832.
- Client sees content-length: 45832 and expects a DATA frame with 45,832 bytes.
- Proxy has no body to send (it was HEAD). It may or may not set END_STREAM on the HEADERS frame.
- Client waits. And waits. Then times out.
The root cause: some HTTP/2 implementations treat content-length as a contract. If you promise 45,832 bytes and deliver zero, that's a protocol error — or at minimum, an ambiguous state that different clients handle differently.
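The "contract" view can be modeled in a few lines of Python (a simplified, hypothetical client state machine, not any real HTTP/2 stack): the stream completes only when END_STREAM arrives, so a content-length promise without that flag leaves the client waiting.

```python
class StreamState:
    """Simplified model of an HTTP/2 client stream awaiting a response."""

    def __init__(self):
        self.expected_body = None
        self.received_body = 0
        self.complete = False

    def on_headers(self, headers: dict, end_stream: bool):
        # content-length sets the body-size "contract" for this stream
        if "content-length" in headers:
            self.expected_body = int(headers["content-length"])
        self.complete = end_stream  # only END_STREAM ends the stream

    def on_data(self, chunk: bytes, end_stream: bool):
        self.received_body += len(chunk)
        self.complete = end_stream


# Buggy proxy: content-length present, END_STREAM missing -> client keeps waiting
buggy = StreamState()
buggy.on_headers({"content-length": "45832"}, end_stream=False)

# Fixed proxy: content-length stripped, END_STREAM set on the HEADERS frame
fixed = StreamState()
fixed.on_headers({}, end_stream=True)
```

In the buggy case the stream never reaches a complete state, which is the stall the monitoring tools observe.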
Not Every Client Breaks the Same Way
How badly this fails depends on the client:
| Client | Behavior with content-length on HEAD over HTTP/2 |
|---|---|
| curl (nghttp2) | Handles correctly — ignores body expectation for HEAD |
| Go net/http | May hang depending on version and keep-alive settings |
| Python httpx | Respects HEAD semantics — no hang |
| Node.js http2 | Can hang if stream END_STREAM flag is missing |
| Java HttpClient | Varies by implementation — some wait for body |
The inconsistency is the real problem. Your proxy might work fine with one client and break with another. But why leave it to chance?
[INTERNAL-LINK: HTTP/2 deployment configuration -> /blog/deploy-nextjs-with-temps]
Which Reverse Proxies Are Affected?
A 2024 Netcraft survey found that Nginx powers roughly 34% of all active websites. Despite its dominance, even Nginx didn't handle this correctly until specific configuration options were added. The landscape is a patchwork of defaults, and most custom proxies get it wrong.
Citation capsule: Nginx serves approximately 34% of active websites (Netcraft, 2024). Its default behavior strips content-length from HEAD over HTTP/2, but HAProxy, Envoy, and custom proxies often pass it through — causing silent timeouts for monitoring tools and CDN prefetchers.
Proxy Behavior Comparison
| Proxy | Default behavior for HEAD + HTTP/2 | Safe by default? |
|---|---|---|
| Nginx | Strips content-length from HEAD over HTTP/2 | Yes |
| HAProxy | Passes content-length through unchanged | No |
| Envoy | Configurable via http2_protocol_options | Depends |
| Caddy | Strips content-length from HEAD responses | Yes |
| Traefik | Passes through by default | No |
| Pingora | Passes through unless explicitly handled | No |
| Custom (hyper, h2) | Almost always passes through | No |
Why Custom Proxies Are Most at Risk
If you're building a proxy with hyper, h2, or any low-level HTTP/2 library, you're responsible for handling this yourself. The library gives you raw frames and headers — it doesn't strip content-length for you. That's by design. Libraries stay close to the wire format and leave policy decisions to you.
But "leave it to the developer" means most developers never think about it until a monitoring tool starts reporting phantom timeouts.
How Do You Fix This?
The fix is straightforward: if the request method is HEAD and the downstream connection is HTTP/2, strip the content-length header from the response. Keep everything else — content-type, etag, last-modified, cache-control. Only content-length causes the hang.
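As a protocol-aware sketch (hypothetical helper and parameter names; adapt to your proxy's actual API), the filter is a few lines. It keys on the downstream protocol's major version, which also covers HTTP/3, since HTTP/3 shares the same framing semantics:

```python
def filter_head_headers(headers: dict, method: str, downstream_major: int) -> dict:
    """Strip content-length from HEAD responses when the downstream
    connection is HTTP/2 or HTTP/3; pass every other header through."""
    if method.upper() == "HEAD" and downstream_major >= 2:
        return {k: v for k, v in headers.items() if k.lower() != "content-length"}
    return dict(headers)


upstream = {
    "content-type": "text/html",
    "content-length": "45832",
    "etag": '"abc123"',
}

# HTTP/2 downstream: content-length removed, everything else preserved
h2_headers = filter_head_headers(upstream, "HEAD", downstream_major=2)

# HTTP/1.1 downstream: headers pass through unchanged
h1_headers = filter_head_headers(upstream, "HEAD", downstream_major=1)
```

Note that etag, content-type, and cache headers survive the filter untouched; only the one header that causes the hang is removed, and only on multiplexed protocols.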
Nginx Configuration
Nginx handles this correctly by default for HTTP/2 frontends. But if you're proxying HEAD requests through a chain of Nginx instances (e.g., edge -> origin), verify the upstream leg too:
server {
listen 443 ssl;
http2 on;
location / {
proxy_pass http://upstream;
proxy_http_version 1.1; # upstream stays HTTP/1.1
# Nginx automatically strips content-length
# from HEAD responses on HTTP/2 frontend
}
}
If you need explicit control:
# Force strip content-length for HEAD responses
if ($request_method = HEAD) {
more_set_headers -s "200 204 301 302" "Content-Length:";
}
Note: the more_set_headers directive requires the headers-more-nginx-module. Setting the header to an empty value effectively removes it.
Node.js Proxy Fix
For a Node.js HTTP/2 proxy — common in serverless frameworks and custom gateways:
import http2 from 'node:http2';
function proxyResponse(clientStream, upstreamHeaders, method) {
const headers = { ...upstreamHeaders };
// Strip content-length from HEAD responses over HTTP/2
if (method === 'HEAD' && headers['content-length']) {
delete headers['content-length'];
}
clientStream.respond(headers, { endStream: method === 'HEAD' });
}
The critical detail: set endStream: true when responding to HEAD. This sends the END_STREAM flag on the HEADERS frame, telling the client no DATA frames will follow.
Rust (hyper) Fix
For a Rust proxy using hyper — the same library Pingora and many production proxies build on:
use hyper::{Request, Response, Method, body::Bytes};
use http::header::CONTENT_LENGTH;
fn fix_head_response(
req: &Request<()>,
mut resp: Response<Bytes>,
) -> Response<Bytes> {
if req.method() == Method::HEAD {
resp.headers_mut().remove(CONTENT_LENGTH);
// Replace body with empty to signal END_STREAM
*resp.body_mut() = Bytes::new();
}
resp
}
In hyper, setting an empty body ensures the response ends cleanly. The HTTP/2 codec sends END_STREAM on the HEADERS frame when there's no body, which is exactly what the client expects for HEAD.
How Did We Discover This Bug?
We hit this exact issue while building the proxy layer in Temps. Our uptime monitoring — which sends HEAD requests every 30 seconds to check app health — started reporting intermittent timeouts. But only for apps served over HTTP/2.
Citation capsule: Intermittent HEAD timeouts over HTTP/2 are a common yet under-documented proxy bug. Cloudflare processes over 57 million HTTP requests per second on average (Cloudflare, 2024), and even at that scale, HEAD semantics require careful handling in every HTTP/2 proxy layer.
The Debugging Timeline
The symptoms were misleading. Here's how it played out:
- Week 1 — Uptime monitor reports 2-3 timeouts per day for a specific app. App logs show no errors. The GET endpoint works perfectly in a browser.
- Week 2 — We add request logging to the Pingora proxy layer. HEAD requests arrive, upstream responds within 20ms. But the downstream connection to the monitor hangs.
- Week 3 — We capture HTTP/2 frames with nghttp. The HEADERS frame includes content-length: 12847 but no END_STREAM flag, and no DATA frame follows. The client sits there waiting.
- The fix — Three lines in the Pingora response filter: check method, check protocol version, strip content-length. Timeouts dropped to zero immediately.
The frustrating part? This only affected HTTP/2 connections. HTTP/1.1 clients handled it fine. So the bug appeared intermittent — depending on which protocol the monitoring tool negotiated.
Why This Is Easy to Miss
Most developers test with curl, which uses HTTP/1.1 by default. You have to explicitly pass --http2 to trigger the bug. Browser DevTools don't show HEAD requests in normal browsing. And the timeout looks like a network issue, not a proxy bug.
[INTERNAL-LINK: monitoring and observability -> /blog/how-to-set-up-opentelemetry-tracing]
How Do You Test for This Bug?
According to the HTTP/2 FAQ maintained by the IETF HTTP Working Group, HTTP/2 implementations should be tested with dedicated tools — not just browsers. A curl -I over HTTP/2 is the fastest way to check, but frame-level inspection reveals the full picture.
Citation capsule: Testing HEAD over HTTP/2 requires dedicated tools beyond browsers. The IETF HTTP Working Group's HTTP/2 FAQ recommends frame-level inspection to verify correct stream termination. A HEAD response should end with END_STREAM on the HEADERS frame and no content-length header.
Quick Check with curl
# Send HEAD over HTTP/2 and check for content-length
curl -I --http2 -s -o /dev/null -w "%{http_code}" https://your-app.com/
# Verbose output shows headers — look for content-length
curl -I --http2 -v https://your-app.com/ 2>&1 | grep -i content-length
If you see content-length in the response, your proxy is passing it through. That doesn't guarantee a hang — it depends on the client — but it's a latent bug waiting to trigger.
Frame-Level Inspection with nghttp
nghttp is the gold standard for HTTP/2 debugging. It shows you the actual frames on the wire:
# Install nghttp2
# macOS: brew install nghttp2
# Ubuntu: apt install nghttp2-client
# Send HEAD and inspect frames
nghttp -vn --no-dep https://your-app.com/ -H ':method: HEAD'
Look for these in the output:
- HEADERS frame with END_STREAM flag — this means the proxy correctly signaled no body.
- content-length header in the HEADERS frame — if present without END_STREAM, the client may hang.
- Any DATA frame after a HEAD — this is a protocol violation.
Monitor Your Access Logs
Add method-specific timeout tracking to your proxy logs:
# Find HEAD requests with abnormal response times
grep "HEAD" /var/log/nginx/access.log | awk '$NF > 5.0 {print}'
Any HEAD request taking more than a few hundred milliseconds is suspicious. HEAD responses should be faster than GET since there's no body to transmit.
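The same check can be scripted for batch analysis (a sketch that, like the grep/awk one-liner above, assumes the log line's last field is the request time in seconds; field positions vary by log format):

```python
def slow_head_requests(log_lines, threshold: float = 0.5):
    """Return access-log lines for HEAD requests whose last field
    (assumed to be request time in seconds) exceeds the threshold."""
    hits = []
    for line in log_lines:
        fields = line.split()
        if '"HEAD' in line and fields:
            try:
                if float(fields[-1]) > threshold:
                    hits.append(line)
            except ValueError:
                continue  # last field is not numeric in this log format
    return hits


sample = [
    '10.0.0.1 - - [12/Mar/2026] "HEAD / HTTP/2.0" 200 0 0.012',
    '10.0.0.2 - - [12/Mar/2026] "HEAD / HTTP/2.0" 200 0 30.001',
    '10.0.0.3 - - [12/Mar/2026] "GET / HTTP/2.0" 200 45832 0.087',
]
slow = slow_head_requests(sample)
```

A 30-second HEAD response on an otherwise healthy app is the signature of this bug: the upstream answered quickly, but the downstream stream never closed until the client gave up.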
[INTERNAL-LINK: cron-based log monitoring -> /blog/how-to-set-up-cron-jobs-production-containers]
Frequently Asked Questions
Does this bug affect HTTP/1.1?
No. HTTP/1.1 clients handle content-length on HEAD responses correctly because the protocol's sequential, connection-oriented nature makes it unambiguous. The client knows it sent HEAD, so it doesn't wait for body data regardless of what content-length says. According to RFC 9110, both HTTP/1.1 and HTTP/2 clients SHOULD handle this — but HTTP/2's multiplexed streams make the edge case far more likely to cause hangs.
Should I strip content-length from all HEAD responses?
Only when the downstream connection is HTTP/2 or HTTP/3. For HTTP/1.1, keeping content-length on HEAD responses is actually useful — it tells the client the resource size without downloading it. Over 60% of traffic uses HTTP/2+ (Cloudflare Radar, 2025), so in practice you'll strip it more often than not. But don't blanket-remove it for all protocols.
How do CDNs handle HEAD requests over HTTP/2?
Major CDNs like Cloudflare, Fastly, and AWS CloudFront strip content-length from HEAD responses on their HTTP/2 edge. They've dealt with this issue at scale. If you're behind a CDN, you're likely protected. But if your origin also serves HTTP/2 directly — for API endpoints, internal services, or bypass routes — you still need the fix at the origin proxy layer.
Can this cause data corruption?
No — it causes timeouts, not corruption. The worst case is a stalled connection that eventually times out. No actual data is misdelivered. But the timeouts can cascade: a monitoring tool that hangs on HEAD may mark your service as down, triggering false alerts and potentially automated failover actions.
Does HTTP/3 (QUIC) have the same issue?
Yes. HTTP/3 inherits HTTP/2's multiplexing model over QUIC streams. The same content-length ambiguity applies. If you're stripping it for HTTP/2, apply the same logic for HTTP/3. The IETF's RFC 9114 (HTTP/3) follows the same framing semantics as HTTP/2 for header-only responses.
Key Takeaways
HEAD requests are deceptively simple. The method has existed since HTTP/1.0, and most developers assume their proxy handles it correctly. But HTTP/2's multiplexed streams turn a harmless content-length header into a connection-stalling bug that's hard to diagnose and easy to miss.
The fix takes three lines of code: check the method, check the protocol, strip the header. Test it with curl -I --http2 and verify with nghttp frame inspection.
If you're running a reverse proxy in production — especially a custom one built on hyper, h2, or similar libraries — add this check today. Your monitoring tools, CDN prefetchers, and API clients will thank you.
[INTERNAL-LINK: deploy with built-in proxy handling -> /blog/introducing-temps-vercel-alternative]
[IMAGE: HTTP/2 HEAD request flow diagram showing HEADERS frame with and without content-length — search terms: http2 request response flow diagram network protocol]
[CHART: Bar chart — HEAD request timeout rates before and after content-length stripping — source: internal Temps monitoring data]