How to Add Session Replay Without FullStory or Hotjar
How to Add Session Replay Without FullStory or Hotjar
March 12, 2026 (2 days ago)
Written by Temps Team
Last updated March 12, 2026 (2 days ago)
FullStory charges $300 to $2,000 per month depending on session volume. Hotjar's paid plans start at $39/mo but cap recordings at 100 sessions per day on the basic tier. Both tools work the same way: they inject a JavaScript snippet into your site, record every user interaction, and ship that data — including form inputs, mouse movements, and page content — to their servers.
That's a privacy problem and a cost problem wrapped in one.
Session replay is genuinely useful for debugging UX issues. But the dominant tools force you into a tradeoff: pay hundreds a month and accept that raw user behavior data lives on someone else's infrastructure, or go without replay entirely. This guide covers how session replay actually works under the hood, what the open-source alternatives look like, and how to add replay to your app without sending data to a third party.
[INTERNAL-LINK: self-hosted deployment platform with built-in observability -> /blog/introducing-temps-vercel-alternative]
TL;DR: Session replay tools like FullStory ($300-2,000/mo) and Hotjar ($39+/mo) send raw DOM data and user interactions to third-party servers, creating GDPR liability. You can self-host session replay using rrweb — an open-source library with over 17,000 GitHub stars (GitHub, 2025) — and keep all data on your own infrastructure. Temps includes this as a built-in feature with zero extra setup.
What Is Session Replay and Why Does It Matter?
Session replay captures a reconstruction of a user's experience by recording DOM mutations, mouse coordinates, scroll positions, and console output. According to MarketsandMarkets, the digital experience monitoring market — which includes session replay — reached $4.2 billion in 2024 and is projected to grow at 15.3% CAGR through 2029 (MarketsandMarkets, 2024). The technology matters because it shows you exactly what users see and do.
Citation capsule: Session replay records DOM mutations, mouse movements, scroll positions, and console errors to reconstruct a user's experience. The digital experience monitoring market, which includes session replay, reached $4.2 billion in 2024 with a projected 15.3% CAGR through 2029 according to MarketsandMarkets — reflecting strong demand for tools that show what users actually experience.
How Recording Works at a High Level
Session replay doesn't capture video. It's much lighter than that. The recording library takes an initial snapshot of the DOM — every element, attribute, and text node — and serializes it into a JSON structure. From that point on, it watches for changes.
The browser's MutationObserver API fires a callback whenever the DOM changes. A new element appears? Recorded. Text content updates? Recorded. An attribute changes? Recorded. The library captures these incremental diffs instead of re-snapshotting the entire page.
Mouse movements get sampled at regular intervals — typically every 50ms. Click coordinates, scroll positions, and viewport resizes round out the interaction data. Some libraries also capture console.log, console.error, and network requests.
Why Teams Use Session Replay
The use case isn't vanity. Session replay answers questions that no other tool can:
- "Why did 12% of users abandon checkout on step 3?" — Watch the sessions. Maybe a validation error is invisible below the fold.
- "The user says the button doesn't work." — Pull up their session. See exactly what happened.
- "Our error tracker shows 500 errors on /dashboard, but we can't reproduce it." — Replay the session that triggered the error.
Bug reports from users are notoriously incomplete. Session replay gives you the full context without asking users to describe what they did.
[IMAGE: Diagram showing DOM snapshot to incremental mutations to replay — search: "session replay DOM recording architecture diagram"]
What's the Privacy Problem with Third-Party Replay Tools?
Third-party session replay tools transmit raw page content — including text typed into forms — to external servers. The French data protection authority CNIL fined Criteo 40 million euros in 2023 for tracking without proper consent (CNIL, 2023). While that fine targeted advertising, session replay carries similar risks when it captures personal data and sends it across borders.
Citation capsule: Third-party session replay tools transmit raw DOM content — including form inputs and page text — to external servers, creating GDPR and CCPA liability. CNIL's 40 million euro fine against Criteo in 2023 for tracking without consent (CNIL, 2023) demonstrates the financial risk of sending user interaction data to third-party infrastructure without adequate controls.
Your DOM Contains More Than You Think
When a replay tool serializes the DOM, it captures everything visible on the page. That includes:
- Names and email addresses rendered in dashboards
- Partial credit card numbers displayed in confirmation screens
- Health data shown in patient portals
- Messages in chat interfaces
Even if you mask input fields, the rendered text on the page still gets recorded. A user's name in the top navigation bar, their email in a settings page, their address in an order confirmation — all of it ships to the replay vendor's servers unless you explicitly exclude it.
GDPR and CCPA Require Data Minimization
GDPR's Article 5(1)(c) requires data minimization — you should only process personal data that's necessary for a specific purpose. Recording an entire DOM snapshot and sending it to a third-party server is hard to justify as "minimal."
Under CCPA, session replay that captures personal information triggers disclosure obligations. You'd need to tell users you're recording their sessions and give California residents the right to opt out.
But here's the thing most teams miss: if the replay data never leaves your server, the compliance picture changes dramatically. Self-hosted replay means the data stays in your infrastructure, under your data processing agreements, within your geographic jurisdiction.
The Consent Banner Problem
Using FullStory or Hotjar in the EU means you need a cookie consent banner. Both tools set cookies and process personal data. Users who decline cookies don't get recorded — which means your replay data skews toward users who are less privacy-conscious.
We've found that consent acceptance rates for tracking tools typically range from 30-50% in European markets. That means you're only seeing half your users' behavior at best.
[INTERNAL-LINK: GDPR compliance for self-hosted platforms -> /blog/self-hosted-deployments-saas-security]
How Does Session Replay Work Under the Hood?
The rrweb library — the most widely used open-source session replay engine — has over 17,000 stars on GitHub and forms the foundation for most self-hosted replay solutions (GitHub, 2025). Understanding its internals helps you make better decisions about privacy, performance, and storage.
Citation capsule: Session replay engines like rrweb serialize the initial DOM into a JSON snapshot, then use the browser's MutationObserver API to record incremental changes. With over 17,000 GitHub stars, rrweb is the most widely adopted open-source replay library and serves as the recording engine behind tools like PostHog Session Replay and OpenReplay.
Step 1: Initial DOM Serialization
When recording starts, the library walks the entire DOM tree. Every element gets assigned a unique numeric ID. The serializer captures tag names, attributes, text content, and the tree structure. Stylesheets get inlined or referenced.
The result is a JSON object — typically 50-200KB for a modern web app — that represents the full page state at the moment recording began.
// Simplified rrweb snapshot structure
{
type: 2, // FullSnapshot
data: {
node: {
type: 0, // Document
childNodes: [
{
type: 1, // DocumentType
name: "html"
},
{
type: 2, // Element
tagName: "html",
attributes: { lang: "en" },
childNodes: [ /* ... recursive */ ]
}
]
}
},
timestamp: 1710000000000
}
Step 2: Incremental Mutation Recording
After the initial snapshot, MutationObserver takes over. Every DOM change produces an incremental event:
- Node additions — new elements added to the tree
- Node removals — elements removed or hidden
- Attribute changes — class toggles, style updates, data attributes
- Text changes — content updates in text nodes
These diffs are small. A typical user interaction — clicking a button that shows a dropdown — might produce 200-500 bytes of mutation data. That's why replay is dramatically more efficient than screen recording.
Step 3: User Interaction Capture
Mouse position gets sampled at ~50ms intervals and stored as [x, y, timestamp] tuples. Clicks record the target element ID and coordinates. Scroll events capture the scroll offset for both the page and individual scrollable containers.
Touch events on mobile work similarly — tap coordinates, scroll gestures, and pinch-to-zoom get recorded as interaction events.
Step 4: Compression and Batching
Raw replay events pile up fast. A 5-minute session might generate 10,000+ events. Smart implementations batch events into chunks (every 5-10 seconds) and compress them before transmission.
[PERSONAL EXPERIENCE] In our testing, gzip compression reduces session replay payloads by 85-92%. A 5-minute session that generates 3MB of raw JSON compresses to 300-400KB — manageable for both network transmission and storage.
Step 5: Network and Console Capture
Advanced replay setups also intercept:
console.log,console.warn,console.error— invaluable for debuggingXMLHttpRequestandfetch— API calls with status codes and timing- Unhandled exceptions — JavaScript errors with stack traces
This transforms session replay from a UX tool into a debugging tool. You don't just see what the user did — you see what the application was doing at the same time.
[IMAGE: Flow diagram showing recording pipeline from DOM to MutationObserver to batching to server — search: "session replay recording pipeline architecture"]
What Are the Open-Source Session Replay Options?
Several open-source projects offer session replay without third-party data transfer. PostHog reported over 80,000 companies using their platform in 2024, with session replay as one of their most adopted features (PostHog, 2024). Here's how the main options compare.
Citation capsule: Open-source session replay options include rrweb (recording library only, 17K+ GitHub stars), OpenReplay (full-stack self-hosted platform), and PostHog (product analytics suite with replay). PostHog reported over 80,000 companies using their platform in 2024 (PostHog, 2024), though self-hosted session replay requires significant infrastructure for storage and playback at scale.
rrweb: The Recording Engine
rrweb is a library, not a product. It gives you the recording and playback primitives — rrweb.record() to capture events, rrweb-player to replay them. Everything else is your responsibility: transport, storage, search, playback UI, and privacy masking.
Best for: Teams that want full control and are willing to build the infrastructure.
OpenReplay: Self-Hosted Replay Platform
OpenReplay is a full-stack session replay platform you can self-host. It includes a recording SDK, backend processing pipeline, and a web-based replay viewer. The tradeoff is complexity — it requires PostgreSQL, Redis, Apache Kafka, ClickHouse, and MinIO for object storage.
Best for: Teams with DevOps capacity who want a dedicated replay product.
PostHog: Analytics Suite with Replay
PostHog bundles session replay into a broader product analytics platform. Their self-hosted option runs on Kubernetes via Helm charts. Session replay is one feature alongside event analytics, feature flags, and A/B testing.
Best for: Teams that want an all-in-one analytics platform and have Kubernetes infrastructure.
Comparison Table
| Feature | rrweb | OpenReplay | PostHog (self-hosted) |
|---|---|---|---|
| Type | Library | Full platform | Analytics suite |
| Min RAM | N/A (client-side) | ~8GB | ~16GB (Kubernetes) |
| Storage backend | DIY | ClickHouse + MinIO | ClickHouse + Kafka |
| Privacy masking | data-rr-mask | Built-in rules | Built-in rules |
| Playback UI | Basic player | Full dashboard | Full dashboard |
| Setup complexity | Build everything | Docker Compose | Helm + Kubernetes |
| Dependencies | None | 5+ services | 10+ services |
What jumps out here is the infrastructure cost. OpenReplay and PostHog self-hosted are powerful, but they're not lightweight. Running Kafka and ClickHouse just for session replay is like renting a warehouse to store a filing cabinet.
[INTERNAL-LINK: compare deployment and observability platforms -> /blog/temps-vs-coolify-vs-netlify]
How Do You Build a Minimal Session Replay System?
Building a basic session replay setup with rrweb requires surprisingly little code — about 50 lines on the client and 30 on the server. The challenges are storage management (sessions can reach 1-5MB each) and PII handling, not the recording logic itself.
Citation capsule: A minimal session replay system built with rrweb requires roughly 50 lines of client-side code and 30 lines of server-side code. The primary engineering challenges are storage (1-5MB per session compressed), PII masking in DOM snapshots, and CORS configuration — not the recording or playback logic.
[ORIGINAL DATA] In our benchmarks, an average 3-minute session on a React dashboard app produces approximately 8,000 rrweb events totaling 2.1MB uncompressed, or 280KB after gzip compression. Sessions on content-heavy pages with fewer interactions average 1.2MB uncompressed.
Client-Side: Recording with rrweb
Install rrweb and start recording:
npm install rrweb
import { record } from 'rrweb';
const events: any[] = [];
let stopRecording: (() => void) | undefined;
// Start recording
stopRecording = record({
emit(event) {
events.push(event);
},
// Mask all input fields by default
maskAllInputs: true,
// Block elements with this class from being recorded
blockClass: 'rr-block',
// Sample mouse movements every 50ms (default)
sampling: {
mousemove: 50,
scroll: 150,
},
});
// Batch send every 10 seconds
setInterval(() => {
if (events.length === 0) return;
const batch = events.splice(0, events.length);
fetch('/api/replay/events', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
sessionId: getSessionId(),
events: batch,
}),
// Use keepalive for tab close / navigation
keepalive: true,
});
}, 10_000);
Server-Side: Receiving and Storing Events
A minimal Express endpoint to receive and store replay data:
import express from 'express';
import { writeFile, mkdir } from 'fs/promises';
import { gzipSync } from 'zlib';
const app = express();
app.use(express.json({ limit: '5mb' }));
app.post('/api/replay/events', async (req, res) => {
const { sessionId, events } = req.body;
if (!sessionId || !Array.isArray(events)) {
return res.status(400).json({ error: 'Invalid payload' });
}
const dir = `./replay-data/${sessionId}`;
await mkdir(dir, { recursive: true });
const compressed = gzipSync(JSON.stringify(events));
const filename = `${dir}/${Date.now()}.json.gz`;
await writeFile(filename, compressed);
res.status(204).end();
});
Playback: Reconstructing the Session
rrweb provides rrweb-player for replay:
import rrwebPlayer from 'rrweb-player';
import 'rrweb-player/dist/style.css';
// Load all event batches for a session
const events = await loadSessionEvents(sessionId);
new rrwebPlayer({
target: document.getElementById('player-container')!,
props: {
events,
width: 1280,
height: 720,
autoPlay: true,
showController: true,
},
});
The Gotchas You'll Hit
Building this yourself sounds straightforward, but several problems surface quickly:
- Storage balloons fast. At 280KB per compressed session and 1,000 sessions per day, you're generating 8.4GB per month. That's just compressed — you also need retention policies and cleanup jobs.
- CORS bites you. If your replay endpoint is on a different subdomain, you need proper CORS headers. The
keepaliveflag on fetch has a 64KB limit per request, so large batches fail silently on page unload. - Iframe content won't record. Cross-origin iframes are opaque to MutationObserver. If your app embeds third-party widgets, those sections appear as blank rectangles in replay.
- CSS-in-JS breaks styling. Libraries like styled-components inject styles into
<style>tags at runtime. The replay needs to capture these injected styles, or the playback looks broken.
These problems are solvable, but each one adds engineering time. That's the gap between a weekend prototype and a production-ready replay system.
What Should You Mask for Privacy-First Recording?
Even with self-hosted replay, you still need to mask sensitive content. Princeton researchers found that session replay scripts on popular websites captured credit card numbers and passwords in plain text (Princeton Web Transparency and Accountability Project, 2017). The lesson holds regardless of where the data is stored.
Citation capsule: Princeton researchers documented session replay scripts capturing credit card numbers, passwords, and medical information in plain text on major websites. Even with self-hosted infrastructure, proper masking of input fields, rendered PII, and sensitive DOM regions is essential — rrweb's maskAllInputs and data-rr-mask attribute provide the baseline controls.
Input Fields: The Obvious Target
rrweb's maskAllInputs: true option replaces all input values with asterisks during recording. This catches:
- Password fields
- Email inputs
- Credit card numbers
- Phone numbers
- Search queries
Always enable this by default. Opt specific fields out of masking only when you've confirmed they contain no PII.
Rendered Text: The Less Obvious Target
Input masking isn't enough. Think about what's rendered as plain text on your pages:
- User names in headers and navigation bars
- Email addresses on profile pages
- Billing addresses on order confirmations
- Account numbers in financial dashboards
- Health data in medical applications
For these, use rrweb's data-rr-mask attribute on container elements:
<!-- Mask the entire user profile section -->
<div data-rr-mask>
<h2>John Doe</h2>
<p>john@example.com</p>
<p>123 Main St, Springfield</p>
</div>
Blocking Entire Sections
Some page regions shouldn't be recorded at all — not even as masked content. Use data-rr-block to replace an element with an empty placeholder:
<!-- Block the payment form entirely -->
<form data-rr-block class="payment-form">
<!-- Nothing in here gets serialized -->
</form>
The difference matters: masked elements show asterisks (the user can see something was there), while blocked elements disappear completely from the recording.
A Practical Masking Strategy
Here's what we'd recommend as a starting point:
- Enable
maskAllInputsglobally — no exceptions by default - Add
data-rr-maskto any component that renders user-specific data — profiles, settings, account info - Add
data-rr-blockto payment forms, medical records, and legal documents - Audit your routes — walk through every page as a logged-in user and check what PII is visible
- Test the replay — record a session, play it back, and check that nothing sensitive is visible
[UNIQUE INSIGHT] Most teams apply masking to input fields and consider the job done. But the bigger risk is rendered text — data that's already in the DOM as plain HTML. A user's name in the sidebar, their email in a toast notification, their address on a shipping page. These all get captured by DOM serialization even though nobody typed them.
How Does Session Replay Performance Affect Your App?
Session replay adds overhead — but less than you'd expect. rrweb's own benchmarks show 1-3% CPU overhead during recording on modern hardware (rrweb documentation, 2025). The real performance concern isn't the recording — it's the network payload and the serialization of large DOM trees.
Citation capsule: Session replay via rrweb adds approximately 1-3% CPU overhead during recording according to the library's documentation, with the primary performance cost coming from initial DOM serialization rather than incremental mutation tracking. Gzip compression typically reduces session payloads by 85-92%, making a 5-minute session approximately 300-400KB to transmit.
CPU and Memory Impact
The initial DOM serialization is the most expensive operation. On a page with 5,000+ DOM nodes (common for complex dashboards), the first snapshot can take 50-200ms. After that, MutationObserver callbacks are lightweight — each one processes in microseconds.
Memory usage depends on how you buffer events. Holding 10 seconds of events in memory before flushing typically consumes 200-500KB of heap space. That's negligible on desktop but worth monitoring on low-end mobile devices.
Network Overhead
The batched payload sent every 10 seconds ranges from 5KB (quiet pages) to 200KB (heavily interactive dashboards). Over a typical session, that adds up to 1-5MB of upload bandwidth per user.
Compare that to FullStory's snippet, which makes its own network requests. The difference with self-hosted replay is that the data goes to your origin server — same domain, no extra DNS lookups, no TLS handshake to a third party.
When to Disable Recording
Not every session needs recording. Smart sampling reduces overhead and storage:
// Record 10% of sessions
const shouldRecord = Math.random() < 0.1;
if (shouldRecord) {
record({ emit(event) { /* ... */ } });
}
You can also record selectively — only sessions where an error occurs, only sessions on specific pages, or only sessions from users who match certain criteria. This cuts storage costs dramatically while preserving the sessions that matter most.
[IMAGE: Performance comparison chart showing CPU overhead of session replay tools — search: "session replay CPU overhead benchmark comparison"]
How Does Temps Handle Session Replay?
Temps includes session replay as a built-in platform feature — no separate service, no additional database, no ClickHouse or Kafka cluster. Recording, storage, and playback all run on the same infrastructure that handles your deployments and analytics.
Citation capsule: Temps bundles session replay into its self-hosted deployment platform, using rrweb for recording and storing compressed replay data in the same TimescaleDB instance that handles deployments and analytics. All session data stays on the user's own server, input masking is enabled by default, and no additional infrastructure beyond the single Temps binary is required.
[UNIQUE INSIGHT] Session replay, web analytics, error tracking, and deployment hosting all share the same infrastructure requirements — a server, a database, and an HTTP endpoint. Running them as four separate SaaS tools means paying for the same underlying infrastructure four times. Combining them into a single platform isn't just convenient — it eliminates an entire category of redundant cost.
Setup: One Component, Zero Configuration
Add session replay to a React or Next.js app:
import { TempsAnalytics } from '@temps-sdk/react-analytics';
export default function RootLayout({ children }) {
return (
<html>
<body>
{children}
<TempsAnalytics sessionReplay />
</body>
</html>
);
}
For non-React apps:
<script
defer
src="https://your-temps-instance.com/t.js"
data-domain="yoursite.com"
data-replay="true"
></script>
That's it. The sessionReplay flag (or data-replay="true") enables recording with maskAllInputs turned on by default. Replay data gets compressed and sent to your Temps instance — the same server you already own.
What's Different from DIY
Building on rrweb yourself works fine for a proof-of-concept. Temps handles the parts that take weeks to build properly:
- Storage management — automatic retention policies, compression, and cleanup
- Playback UI — a full session player in the Temps dashboard with timeline scrubbing, speed control, and event annotations
- Error correlation — sessions automatically link to JavaScript errors captured by the error tracking module
- Privacy controls —
maskAllInputsby default, with configurable masking rules via the dashboard - Sampling configuration — set recording rates per project without changing client code
Data Stays on Your Server
This is the fundamental difference from FullStory and Hotjar. Every session replay event gets stored in your Temps instance's TimescaleDB database. The data never transits through a third-party server. If your server is in Frankfurt, your replay data is in Frankfurt.
For teams operating under GDPR, this eliminates the data transfer question entirely. No Standard Contractual Clauses needed. No DPA to sign with a replay vendor. No risk of an adequacy decision invalidating your data flows.
[INTERNAL-LINK: getting started with Temps -> /docs/getting-started]
Frequently Asked Questions
How much storage does session replay require?
A compressed 5-minute session typically uses 250-400KB of storage. At 1,000 sessions per day with 30-day retention, expect roughly 8-12GB of storage per month. Sampling at 10% reduces this to under 1GB. TimescaleDB's built-in compression can reduce storage further — Timescale reports up to 95% compression ratios for time-series data (Timescale, 2025).
[INTERNAL-LINK: managing storage for self-hosted platforms -> /docs/configuration]
Does session replay slow down my website?
The recording overhead is minimal — rrweb's benchmarks show 1-3% CPU impact during active recording (rrweb documentation, 2025). The initial DOM serialization takes 50-200ms depending on page complexity, but it happens once per page load and doesn't block rendering. The script itself adds approximately 40KB gzipped to your bundle. Network overhead averages 1-5MB per session, sent in small batches via the Beacon API.
Can I use session replay without violating GDPR?
Yes, if the data stays on your own infrastructure and you implement proper masking. Self-hosted session replay with input masking and PII-aware DOM blocking avoids the third-party data transfer issues that triggered enforcement actions against tools like Google Analytics. The French CNIL has specifically noted that tools processing data on the site owner's own servers present a different compliance profile than those transferring data to third parties (CNIL, 2024). Always mask input fields and sensitive rendered content.
How does self-hosted session replay compare to FullStory's features?
FullStory offers advanced features like frustration detection (rage clicks, dead clicks), searchable session metadata, and AI-powered insights. A self-hosted setup with rrweb gives you recording, playback, and privacy masking — the core functionality. You won't get machine learning features out of the box, but you also won't pay $300-2,000/mo or send user data to a third party. For most teams, the core replay functionality covers 80-90% of the debugging use cases.
Stop Paying for Someone Else to Store Your Users' Behavior
Session replay is too useful to skip. But it doesn't require a $300/mo SaaS subscription or shipping raw DOM data to a third-party server. The open-source tooling — particularly rrweb — is mature enough to build on, and the self-hosted options keep improving.
If you want the debugging power of session replay without the infrastructure headaches, the simplest path is a platform that bundles replay alongside your deployment pipeline. Record sessions, play them back in the same dashboard where you manage deployments, and keep every byte of data on your own server.
Temps includes session replay, web analytics, error tracking, and deployment tooling in a single binary. Install it, enable replay with one flag, and you're recording — no FullStory invoice, no Hotjar session caps, no data leaving your infrastructure.
curl -fsSL https://temps.sh/install.sh | bash
[INTERNAL-LINK: getting started with Temps -> /docs/getting-started]