March 12, 2026
Written by Temps Team
Last updated March 12, 2026
FullStory charges $300 to $2,000 per month depending on session volume. Hotjar's paid plans start at $39/mo but cap recordings at 100 sessions per day on the basic tier. Both tools work the same way: they inject a JavaScript snippet into your site, record every user interaction, and ship that data — including form inputs, mouse movements, and page content — to their servers.
That's a privacy problem and a cost problem wrapped in one.
Session replay is genuinely useful for debugging UX issues. But the dominant tools force you into a tradeoff: pay hundreds a month and accept that raw user behavior data lives on someone else's infrastructure, or go without replay entirely. This guide covers how session replay actually works under the hood, what the open-source alternatives look like, and how to add replay to your app without sending data to a third party.
TL;DR: Session replay tools like FullStory ($300-2,000/mo) and Hotjar ($39+/mo) send raw DOM data and user interactions to third-party servers, creating GDPR liability. You can self-host session replay using rrweb — an open-source library with over 17,000 GitHub stars — and keep all data on your own infrastructure. Temps includes this as a built-in feature with zero extra setup.
Session replay captures a reconstruction of a user's experience by recording DOM mutations, mouse coordinates, scroll positions, and console output. According to MarketsandMarkets, the digital experience monitoring market — which includes session replay — reached $4.2 billion in 2024 and is projected to grow at 15.3% CAGR through 2029. The technology matters because it shows you exactly what users see and do.
Session replay doesn't capture video. It's much lighter than that. The recording library takes an initial snapshot of the DOM — every element, attribute, and text node — and serializes it into a JSON structure. From that point on, it watches for changes.
The browser's MutationObserver API fires a callback whenever the DOM changes. A new element appears? Recorded. Text content updates? Recorded. An attribute changes? Recorded. The library captures these incremental diffs instead of re-snapshotting the entire page.
Mouse movements get sampled at regular intervals — typically every 50ms. Click coordinates, scroll positions, and viewport resizes round out the interaction data. Some libraries also capture console.log, console.error, and network requests.
The use case isn't vanity. Session replay answers questions no other tool can: What did the user actually see? What did they click? How did the page respond?
Bug reports from users are notoriously incomplete. Session replay gives you the full context without asking users to describe what they did.
[IMAGE: Diagram showing DOM snapshot to incremental mutations to replay]
Not every app needs it. Session replay adds a non-trivial script to your page (usually 50-100KB) and generates storage costs. It's worth it in specific situations.
- **High checkout or conversion funnel abandonment.** "12% of users drop off at step 3" is an analytics fact. Why they drop off requires watching them.
- **Vague bug reports.** "It just doesn't work" tells you nothing. A session replay of that user shows you exactly what they clicked and what the page showed in response.
- **Onboarding confusion.** Where do new users get stuck? Watching 20 onboarding sessions tells you more than most quantitative analyses.
- **Accessibility issues.** Keyboard-only navigation problems, tab order issues, and focus traps show up clearly in replays in ways that automated tests miss.
Less useful for high-traffic content sites where most pages are informational and user behavior is predictable. More useful for apps with complex workflows, multi-step forms, or frequent user-reported "broken" experiences.
Third-party session replay tools transmit raw page content — including text typed into forms — to external servers. The French data protection authority CNIL fined Criteo 40 million euros in 2023 for tracking without proper consent. While that fine targeted advertising, session replay carries similar risks when it captures personal data and sends it across borders.
When a replay tool serializes the DOM, it captures everything visible on the page — not just what users type. Even if you mask input fields, the rendered text on the page still gets recorded. A user's name in the top navigation bar, their email on a settings page, their address in an order confirmation — all of it ships to the replay vendor's servers unless you explicitly exclude it.
GDPR's Article 5(1)(c) requires data minimization — you should only process personal data that's necessary for a specific purpose. Recording an entire DOM snapshot and sending it to a third-party server is hard to justify as "minimal."
Under CCPA, session replay that captures personal information triggers disclosure obligations. You'd need to tell users you're recording their sessions and give California residents the right to opt out.
But here's the thing most teams miss: if the replay data never leaves your server, the compliance picture changes dramatically. Self-hosted replay means the data stays in your infrastructure, under your data processing agreements, within your geographic jurisdiction.
Using FullStory or Hotjar in the EU means you need a cookie consent banner. Both tools set cookies and process personal data. Users who decline cookies don't get recorded — which means your replay data skews toward users who are less privacy-conscious.
We've found that consent acceptance rates for tracking tools typically range from 30-50% in European markets. That means you're only seeing half your users' behavior at best.
The rrweb library — the most widely used open-source session replay engine — has over 17,000 stars on GitHub and forms the foundation for most self-hosted replay solutions. Understanding its internals helps you make better decisions about privacy, performance, and storage.
When recording starts, the library walks the entire DOM tree. Every element gets assigned a unique numeric ID. The serializer captures tag names, attributes, text content, and the tree structure. Stylesheets get inlined or referenced.
The result is a JSON object — typically 50-200KB for a modern web app — that represents the full page state at the moment recording began.
```javascript
// Simplified rrweb snapshot structure
{
  type: 2, // FullSnapshot
  data: {
    node: {
      type: 0, // Document
      childNodes: [
        {
          type: 1, // DocumentType
          name: "html"
        },
        {
          type: 2, // Element
          tagName: "html",
          attributes: { lang: "en" },
          childNodes: [ /* ... recursive */ ]
        }
      ]
    }
  },
  timestamp: 1710000000000
}
```
After the initial snapshot, MutationObserver takes over. Every DOM change — a node added or removed, text content updated, an attribute modified — produces an incremental event.
These diffs are small. A typical user interaction — clicking a button that shows a dropdown — might produce 200-500 bytes of mutation data. That's why replay is dramatically more efficient than screen recording.
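To make that size claim concrete, here's a sketch of a single mutation event. The field layout follows rrweb's event shape (`type: 3` is IncrementalSnapshot, `source: 0` is Mutation); the payload itself is made up for illustration:

```typescript
// Illustrative incremental event: one <li> added under parent node 42.
// Shapes follow rrweb's event format; the content is invented.
const mutationEvent = {
  type: 3, // IncrementalSnapshot
  data: {
    source: 0, // Mutation
    adds: [
      {
        parentId: 42,
        nextId: null,
        node: { type: 2, tagName: 'li', attributes: {}, childNodes: [], id: 107 },
      },
    ],
    removes: [],
    texts: [],
    attributes: [],
  },
  timestamp: 1710000000000,
};

// Serialized size: a few hundred bytes, not a full page re-snapshot
const bytes = Buffer.byteLength(JSON.stringify(mutationEvent));
console.log(bytes);
```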
Mouse position gets sampled at ~50ms intervals and stored as [x, y, timestamp] tuples. Clicks record the target element ID and coordinates. Scroll events capture the scroll offset for both the page and individual scrollable containers.
Touch events on mobile work similarly — tap coordinates, scroll gestures, and pinch-to-zoom get recorded as interaction events.
Raw replay events pile up fast. A 5-minute session might generate 10,000+ events. Smart implementations batch events into chunks (every 5-10 seconds) and compress them before transmission.
In our testing, gzip compression reduces session replay payloads by 85-92%. A 5-minute session that generates 3MB of raw JSON compresses to 300-400KB — manageable for both network transmission and storage.
Advanced replay setups also intercept:

- `console.log`, `console.warn`, `console.error` — invaluable for debugging
- `XMLHttpRequest` and `fetch` — API calls with status codes and timing

This transforms session replay from a UX tool into a debugging tool. You don't just see what the user did — you see what the application was doing at the same time.
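Console interception is simple monkey-patching. rrweb ships a console-recording plugin that does this for you; the sketch below shows the underlying idea, assuming a hypothetical `pushEvent` hook that feeds the same queue as the DOM events:

```typescript
type ConsoleEvent = { level: string; args: string[]; timestamp: number };

// Patch console.error so every call is mirrored into the replay event stream.
// `pushEvent` is a hypothetical hook into your batching queue.
function instrumentConsole(pushEvent: (e: ConsoleEvent) => void) {
  const original = console.error;
  console.error = (...args: unknown[]) => {
    pushEvent({
      level: 'error',
      args: args.map((a) => String(a)),
      timestamp: Date.now(),
    });
    original.apply(console, args); // keep normal logging behavior
  };
  // Return an undo function so recording can stop cleanly
  return () => { console.error = original; };
}
```

The same pattern applies to `fetch`: wrap the global function, record URL, status, and timing, then delegate to the original.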
[IMAGE: Flow diagram showing recording pipeline from DOM to MutationObserver to batching to server]
Several open-source projects offer session replay without third-party data transfer. According to PostHog, over 80,000 companies use their platform, with session replay as one of their most adopted features. Here's how the main options compare.
rrweb is a library, not a product. It gives you the recording and playback primitives — rrweb.record() to capture events, rrweb-player to replay them. Everything else is your responsibility: transport, storage, search, playback UI, and privacy masking.
Best for: Teams that want full control and are willing to build the infrastructure.
OpenReplay is a full-stack session replay platform you can self-host. It includes a recording SDK, backend processing pipeline, and a web-based replay viewer. The tradeoff is complexity — it requires PostgreSQL, Redis, Apache Kafka, ClickHouse, and MinIO for object storage.
Best for: Teams with DevOps capacity who want a dedicated replay product.
PostHog bundles session replay into a broader product analytics platform. Their self-hosted option runs on Kubernetes via Helm charts. Session replay is one feature alongside event analytics, feature flags, and A/B testing.
Best for: Teams that want an all-in-one analytics platform and have Kubernetes infrastructure.
| Feature | rrweb | OpenReplay | PostHog (self-hosted) |
|---|---|---|---|
| Type | Library | Full platform | Analytics suite |
| Min RAM | N/A (client-side) | ~8GB | ~16GB (Kubernetes) |
| Storage backend | DIY | ClickHouse + MinIO | ClickHouse + Kafka |
| Privacy masking | `rr-mask` / `rr-block` classes | Built-in rules | Built-in rules |
| Playback UI | Basic player | Full dashboard | Full dashboard |
| Setup complexity | Build everything | Docker Compose | Helm + Kubernetes |
| Dependencies | None | 5+ services | 10+ services |
What jumps out here is the infrastructure cost. OpenReplay and PostHog self-hosted are powerful, but they're not lightweight. Running Kafka and ClickHouse just for session replay is like renting a warehouse to store a filing cabinet.
Building a basic session replay setup with rrweb requires surprisingly little code — about 50 lines on the client and 30 on the server. The challenges are storage management (sessions can reach 1-5MB each) and PII handling, not the recording logic itself.
In our benchmarks, an average 3-minute session on a React dashboard app produces approximately 8,000 rrweb events totaling 2.1MB uncompressed, or 280KB after gzip compression. Sessions on content-heavy pages with fewer interactions average 1.2MB uncompressed.
Install rrweb and start recording:
```bash
npm install rrweb
```
```typescript
import { record } from 'rrweb';

const events: any[] = [];
let stopRecording: (() => void) | undefined;

// Start recording
stopRecording = record({
  emit(event) {
    events.push(event);
  },
  // Mask all input fields by default
  maskAllInputs: true,
  // Block elements with this class from being recorded
  blockClass: 'rr-block',
  // Sample mouse movements every 50ms (default)
  sampling: {
    mousemove: 50,
    scroll: 150,
  },
});

// Batch send every 10 seconds
setInterval(() => {
  if (events.length === 0) return;
  const batch = events.splice(0, events.length);
  fetch('/api/replay/events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      sessionId: getSessionId(),
      events: batch,
    }),
    // Use keepalive for tab close / navigation
    keepalive: true,
  });
}, 10_000);
```
A minimal Express endpoint to receive and store replay data:
```typescript
import express from 'express';
import { writeFile, mkdir } from 'fs/promises';
import { gzipSync } from 'zlib';

const app = express();
app.use(express.json({ limit: '5mb' }));

app.post('/api/replay/events', async (req, res) => {
  const { sessionId, events } = req.body;
  if (!sessionId || !Array.isArray(events)) {
    return res.status(400).json({ error: 'Invalid payload' });
  }
  const dir = `./replay-data/${sessionId}`;
  await mkdir(dir, { recursive: true });
  const compressed = gzipSync(JSON.stringify(events));
  const filename = `${dir}/${Date.now()}.json.gz`;
  await writeFile(filename, compressed);
  res.status(204).end();
});
```
rrweb provides rrweb-player for replay:
```typescript
import rrwebPlayer from 'rrweb-player';
import 'rrweb-player/dist/style.css';

// Load all event batches for a session
const events = await loadSessionEvents(sessionId);

new rrwebPlayer({
  target: document.getElementById('player-container')!,
  props: {
    events,
    width: 1280,
    height: 720,
    autoPlay: true,
    showController: true,
  },
});
```
Building this yourself sounds straightforward, but several problems surface quickly:
- The `keepalive` flag on fetch has a 64KB limit per request, so large batches fail silently on page unload.
- Anything that injects `<style>` tags at runtime (CSS-in-JS libraries, for example) complicates playback. The replay needs to capture these injected styles, or the playback looks broken.

These problems are solvable, but each one adds engineering time. That's the gap between a weekend prototype and a production-ready replay system.
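The 64KB `keepalive` limit is the easiest of these to handle: split oversized batches before sending. A minimal sketch — the byte budget and event shape here are illustrative, not part of rrweb:

```typescript
// Split a batch of events into chunks whose serialized size stays under
// a byte budget (fetch keepalive requests are capped around 64KB).
function chunkBySerializedSize<T>(items: T[], maxBytes: number): T[][] {
  const chunks: T[][] = [];
  let current: T[] = [];
  let currentBytes = 2; // account for the surrounding "[]"
  for (const item of items) {
    const itemBytes = Buffer.byteLength(JSON.stringify(item)) + 1; // +1 for comma
    if (current.length > 0 && currentBytes + itemBytes > maxBytes) {
      chunks.push(current);
      current = [];
      currentBytes = 2;
    }
    current.push(item);
    currentBytes += itemBytes;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Each chunk then goes out as its own keepalive fetch (or `navigator.sendBeacon` call), so a page unload never silently drops an entire batch.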
Even with self-hosted replay, you still need to mask sensitive content. The Princeton Web Transparency and Accountability Project found that session replay scripts on popular websites captured credit card numbers and passwords in plain text. The lesson holds regardless of where the data is stored.
rrweb's `maskAllInputs: true` option replaces all input values with asterisks during recording. This catches text fields, textareas, and select elements, along with sensitive input types like password, email, and tel.
Always enable this by default. Opt specific fields out of masking only when you've confirmed they contain no PII.
Input masking isn't enough. Think about what's rendered as plain text on your pages: user names in headers, email addresses on account screens, order details, support conversations.
For these, add rrweb's `rr-mask` class (the `maskTextClass` default; also configurable via `maskTextSelector`) to container elements:
```html
<!-- Mask the entire user profile section -->
<div class="rr-mask">
  <h2>John Doe</h2>
  <p>john@example.com</p>
  <p>123 Main St, Springfield</p>
</div>
```
Some page regions shouldn't be recorded at all — not even as masked content. Use the `rr-block` class (the `blockClass` default, matching the recording config above) to replace an element with an empty placeholder:
```html
<!-- Block the payment form entirely -->
<form class="rr-block payment-form">
  <!-- Nothing in here gets serialized -->
</form>
```
The difference matters: masked elements show asterisks (the user can see something was there), while blocked elements disappear completely from the recording.
Here's what we'd recommend as a starting point:
- Enable `maskAllInputs` globally — no exceptions by default
- Add the `rr-mask` class to any component that renders user-specific data — profiles, settings, account info
- Add the `rr-block` class to payment forms, medical records, and legal documents

Most teams apply masking to input fields and consider the job done. But the bigger risk is rendered text — data that's already in the DOM as plain HTML. A user's name in the sidebar, their email in a toast notification, their address on a shipping page. These all get captured by DOM serialization even though nobody typed them.
Session replay adds overhead — but less than you'd expect. According to rrweb's documentation, recording adds 1-3% CPU overhead on modern hardware. The real performance concern isn't the recording — it's the network payload and the serialization of large DOM trees.
The initial DOM serialization is the most expensive operation. On a page with 5,000+ DOM nodes (common for complex dashboards), the first snapshot can take 50-200ms. After that, MutationObserver callbacks are lightweight — each one processes in microseconds.
Memory usage depends on how you buffer events. Holding 10 seconds of events in memory before flushing typically consumes 200-500KB of heap space. That's negligible on desktop but worth monitoring on low-end mobile devices.
The batched payload sent every 10 seconds ranges from 5KB (quiet pages) to 200KB (heavily interactive dashboards). Over a typical session, that adds up to 1-5MB of upload bandwidth per user.
Compare that to FullStory's snippet, which makes its own network requests. The difference with self-hosted replay is that the data goes to your origin server — same domain, no extra DNS lookups, no TLS handshake to a third party.
Not every session needs recording. Smart sampling reduces overhead and storage:
```typescript
// Record 10% of sessions
const shouldRecord = Math.random() < 0.1;

if (shouldRecord) {
  record({ emit(event) { /* ... */ } });
}
```
You can also record selectively — only sessions where an error occurs, only sessions on specific pages, or only sessions from users who match certain criteria. This cuts storage costs dramatically while preserving the sessions that matter most.
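Error-triggered recording is the highest-value variant of selective recording: keep a rolling buffer in memory and only ship it when something goes wrong. rrweb just emits events — the retention policy is yours. A sketch, where `sendBatch` is a hypothetical transport function:

```typescript
type ReplayEvent = { type: number; timestamp: number };

// Rolling buffer: silently keep the last `windowMs` of events, and only
// start shipping once an error occurs. `sendBatch` is a hypothetical transport.
function createErrorTriggeredBuffer(
  sendBatch: (events: ReplayEvent[]) => void,
  windowMs = 60_000,
) {
  const buffer: ReplayEvent[] = [];
  let triggered = false;
  return {
    push(event: ReplayEvent) {
      buffer.push(event);
      if (triggered) return;
      // Before an error fires, drop anything older than the window
      const cutoff = event.timestamp - windowMs;
      while (buffer.length > 0 && buffer[0].timestamp < cutoff) buffer.shift();
    },
    onError() {
      if (triggered) return;
      triggered = true;
      sendBatch([...buffer]); // ship the pre-error context immediately
    },
  };
}
```

Wire `push` into record's `emit` callback and `onError` into `window.onerror`; sessions with no errors never leave the browser.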
[IMAGE: Performance comparison chart showing CPU overhead of session replay tools]
Temps includes session replay as a built-in platform feature — no separate service, no additional database, no ClickHouse or Kafka cluster. Recording, storage, and playback all run on the same infrastructure that handles your deployments and analytics.
Session replay, web analytics, error tracking, and deployment hosting all share the same infrastructure requirements — a server, a database, and an HTTP endpoint. Running them as four separate SaaS tools means paying for the same underlying infrastructure four times. Combining them into a single platform isn't just convenient — it eliminates an entire category of redundant cost.
Add session replay to a React or Next.js app:
```tsx
import { TempsAnalytics } from '@temps-sdk/react-analytics';

export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        {children}
        <TempsAnalytics sessionReplay />
      </body>
    </html>
  );
}
```
For non-React apps:
```html
<script
  defer
  src="https://your-temps-instance.com/t.js"
  data-domain="yoursite.com"
  data-replay="true"
></script>
```
That's it. The sessionReplay flag (or data-replay="true") enables recording with maskAllInputs turned on by default. Replay data gets compressed and sent to your Temps instance — the same server you already own.
Building on rrweb yourself works fine for a proof-of-concept. Temps handles the parts that take weeks to build properly:
- `maskAllInputs` by default, with configurable masking rules via the dashboard

This is the fundamental difference from FullStory and Hotjar. Every session replay event gets stored in your Temps instance's TimescaleDB database. The data never transits through a third-party server. If your server is in Frankfurt, your replay data is in Frankfurt.
For teams operating under GDPR, this eliminates the data transfer question entirely. No Standard Contractual Clauses needed. No DPA to sign with a replay vendor. No risk of an adequacy decision invalidating your data flows.
If you're self-hosting, the storage math matters. At 200KB average per session (after compression), 10,000 sessions/month is 2GB. 100,000 sessions/month is 20GB. A year of sessions at that volume is 240GB — about $5-6/mo on object storage.
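The arithmetic above, as a quick sanity check — the per-session size and the object-storage rate are the estimates already quoted in this section, not measurements:

```typescript
// Back-of-envelope storage costs for self-hosted replay.
const kbPerSession = 200;          // compressed average, per the estimate above
const sessionsPerMonth = 100_000;

const gbPerMonth = (kbPerSession * sessionsPerMonth) / 1_000_000;
const gbPerYear = gbPerMonth * 12;

// Illustrative object-storage rate of ~$0.023/GB-month (varies by provider)
const monthlyCostAtYearEnd = gbPerYear * 0.023;

console.log(`${gbPerMonth} GB/month, ${gbPerYear} GB/year`);
console.log(`~$${monthlyCostAtYearEnd.toFixed(2)}/mo to store a full year`);
```

Even at ten times that volume, storage remains a rounding error next to a $300-2,000/mo SaaS bill.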
For most teams, storage cost for self-hosted replay is negligible compared to SaaS pricing. The main operational cost is running the service and maintaining the replay infrastructure. Tools like OpenReplay abstract that away; raw rrweb requires you to build it.
The question to ask: does the debugging and UX value of replay justify the cost (money, storage, operational overhead, and privacy audit work) at your current traffic level? For most teams under 1,000 sessions/day, the answer is yes if you're running a complex product. For simple sites, probably not.
A compressed 5-minute session typically uses 250-400KB of storage. At 1,000 sessions per day with 30-day retention, expect roughly 8-12GB of storage per month. Sampling at 10% reduces this to under 1GB. TimescaleDB's built-in compression can reduce storage further — Timescale reports up to 95% compression ratios for time-series data.
The recording overhead is minimal. According to rrweb's benchmarks, recording adds 1-3% CPU impact during active recording. The initial DOM serialization takes 50-200ms depending on page complexity, but it happens once per page load and doesn't block rendering. The script itself adds approximately 40KB gzipped to your bundle. Network overhead averages 1-5MB per session, sent in small batches via the Beacon API.
Yes, if the data stays on your own infrastructure and you implement proper masking. Self-hosted session replay with input masking and PII-aware DOM blocking avoids the third-party data transfer issues that triggered enforcement actions against tools like Google Analytics. The French CNIL has specifically noted that tools processing data on the site owner's own servers present a different compliance profile than those transferring data to third parties. Always mask input fields and sensitive rendered content.
FullStory offers advanced features like frustration detection (rage clicks, dead clicks), searchable session metadata, and AI-powered insights. A self-hosted setup with rrweb gives you recording, playback, and privacy masking — the core functionality. You won't get machine learning features out of the box, but you also won't pay $300-2,000/mo or send user data to a third party. For most teams, the core replay functionality covers 80-90% of the debugging use cases. For a side-by-side comparison of pricing and features across 6 tools, see 6 Best FullStory Alternatives.
Session replay is too useful to skip. But it doesn't require a $300/mo SaaS subscription or shipping raw DOM data to a third-party server. The open-source tooling — particularly rrweb — is mature enough to build on, and the self-hosted options keep improving.
If you want the debugging power of session replay without the infrastructure headaches, the simplest path is a platform that bundles replay alongside your deployment pipeline. Record sessions, play them back in the same dashboard where you manage deployments, and keep every byte of data on your own server.
Temps includes session replay, web analytics, error tracking, and deployment tooling in a single binary. Install it, enable replay with one flag, and you're recording — no FullStory invoice, no Hotjar session caps, no data leaving your infrastructure.
```bash
curl -fsSL https://temps.sh/install.sh | bash
```