Written by Temps Team
Last updated March 12, 2026
Sentry's free tier gives you 5,000 errors per month. That sounds reasonable until a single bug loop on a Friday afternoon burns through it in minutes. Suddenly you're flying blind during an outage, or scrambling to upgrade to the Team plan at $26/month. For a startup running three services, quota upgrades and overage charges push that past $80/month just to know when things break.
Error tracking isn't optional. It's the difference between finding a bug in 30 seconds and spending two hours grepping log files. But the market has consolidated around a few expensive SaaS tools that charge per-event pricing — a model that punishes you for having more users.
This guide covers what good error tracking actually requires, how it works under the hood, what alternatives exist, and how to build or adopt a solution that doesn't charge you by the error.
TL;DR: SaaS error tracking costs $26-31/month minimum and charges per event, which punishes growth. Self-hosted alternatives like GlitchTip run on 512MB of RAM. You can build basic error tracking in ~80 lines of code, or use a deployment platform like Temps that includes error capture, grouping, and alerting with zero extra services.
Production error tracking requires six core capabilities, yet the JetBrains Developer Ecosystem Survey found that 39% of developers still rely on manual log inspection as their primary debugging method. The gap between "we log errors" and "we track errors" is enormous.
A raw stack trace from minified JavaScript is useless. You'll see something like a.js:1:4523 — good luck finding that bug. Source map support transforms that into checkout.tsx:47:handleSubmit, pointing you to the exact line in your original source code.
Server-side errors need the same treatment. Node.js stack traces are readable by default, but transpiled TypeScript or bundled code needs source map resolution; Node's built-in --enable-source-maps flag provides this at runtime. Without it, you're reading compiled output.
A single bug can generate thousands of identical errors. If your checkout form throws a TypeError on every submission, you don't want 10,000 separate alerts. You want one group that says "TypeError in handleSubmit — 10,000 occurrences, 847 affected users."
Grouping works by fingerprinting errors — combining the error type, message, and stack trace into a hash. Similar errors collapse into a single issue. This is the feature that separates real error tracking from a log file full of noise.
Knowing when a bug was introduced changes everything. Release tracking tags every error with the deploy that was running when it occurred. "This error started appearing after deploy v2.4.1 at 3:15 PM" is actionable. "This error exists somewhere" is not.
Who hit this bug? What were they doing before it happened? User context attaches identifying information — user ID, email, account plan — to each error. Breadcrumbs record the trail of events leading up to the crash: page navigations, button clicks, API calls, console logs.
Together, they let you reproduce the exact sequence that triggers a bug without asking the user "what did you click?"
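As a rough sketch, an SDK implements breadcrumbs with a small in-memory ring buffer of recent events that gets attached to every error payload. The names below are illustrative rather than any particular SDK's API:

// Keep the last 20 events in memory as an illustrative breadcrumb trail
const breadcrumbs = [];
function addBreadcrumb(category, message) {
  breadcrumbs.push({ category, message, at: Date.now() });
  if (breadcrumbs.length > 20) breadcrumbs.shift();
}

// Record the kinds of events that typically precede a crash
document.addEventListener('click', (e) => addBreadcrumb('ui.click', e.target.tagName), true);
window.addEventListener('popstate', () => addBreadcrumb('navigation', location.pathname));

// ...then include `breadcrumbs` in the payload your error handlers send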
Good alerting triggers on new errors and spikes in existing ones. If a known bug fires 50 times a day and that's normal, don't page anyone. But if a new TypeError appears after a deploy, or an existing error jumps from 50 to 5,000 occurrences, that's worth a Slack notification or a PagerDuty alert.
Browser-side error capture relies on two global event handlers that catch 95% of unhandled exceptions in modern browsers. Understanding the mechanics helps you evaluate any tool — or build your own.
Every error tracking SDK starts with these two handlers:
// Catches synchronous errors and runtime exceptions
window.onerror = function(message, source, lineno, colno, error) {
sendToTracker({ message, source, lineno, colno, stack: error?.stack });
};
// Catches unhandled Promise rejections
window.addEventListener('unhandledrejection', function(event) {
sendToTracker({ message: event.reason?.message, stack: event.reason?.stack });
});
That's the foundation. React apps add Error Boundaries to catch component-level crashes. Vue has app.config.errorHandler. Svelte has handleError. But they all funnel back to a function that serializes the error and sends it to your backend.
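As an example, a minimal React Error Boundary that funnels into the same hypothetical sendToTracker function looks like this:

import React from 'react';

// Keeps the component tree alive after a render crash and adds
// React's componentStack to the report
class ErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error, info) {
    sendToTracker({
      message: error.message,
      stack: error.stack,
      metadata: { componentStack: info.componentStack }
    });
  }

  render() {
    return this.state.hasError ? <p>Something went wrong.</p> : this.props.children;
  }
}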
On the server, the pattern depends on your framework. Express uses error middleware:
// Four parameters mark this as Express error-handling middleware
app.use((err, req, res, next) => {
sendToTracker({
message: err.message,
stack: err.stack,
url: req.originalUrl,
method: req.method,
userId: req.user?.id
});
res.status(500).json({ error: 'Internal server error' });
});
Node.js also needs process.on('uncaughtException') and process.on('unhandledRejection') as safety nets for errors that escape your middleware chain.
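A sketch of those safety nets, reusing the same placeholder sendToTracker:

// Last-resort handlers for errors that escape the middleware chain
process.on('uncaughtException', (err) => {
  sendToTracker({ message: err.message, stack: err.stack, metadata: { fatal: true } });
  process.exit(1); // real code should flush the report first — process state is unknown, so restart
});

process.on('unhandledRejection', (reason) => {
  // reason is not guaranteed to be an Error instance
  sendToTracker({ message: String(reason), stack: reason?.stack });
});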
Here's how source map resolution works in practice. During your build step, your bundler (Webpack, Vite, esbuild) generates .map files alongside your minified output. You upload those maps to your error tracking service, tagged with the release version.
When an error arrives with a minified stack trace, the service looks up the corresponding source map, runs the mapping algorithm, and displays the original file name, line number, and column. Without this step, frontend error tracking is nearly useless.
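As an illustration of that lookup, here's roughly what it looks like with Mozilla's source-map package (the file path and positions are placeholders):

import fs from 'node:fs/promises';
import { SourceMapConsumer } from 'source-map';

// Map a minified position like a.js:1:4523 back to the original source
async function resolveFrame(mapPath, line, column) {
  const rawMap = JSON.parse(await fs.readFile(mapPath, 'utf8'));
  return SourceMapConsumer.with(rawMap, null, (consumer) =>
    consumer.originalPositionFor({ line, column })
    // → e.g. { source: 'checkout.tsx', line: 47, column: 12, name: 'handleSubmit' }
  );
}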
Not every error deserves storage. Fingerprinting hashes the error type, message (with variable parts stripped), and the top frames of the stack trace. Identical fingerprints collapse into one group. But what about high-volume errors that would overwhelm your storage?
Sampling helps. You can capture 100% of unique errors while sampling high-frequency repeats. A common approach: always capture the first occurrence, then sample at 10% for errors that fire more than 100 times per hour.
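A sketch of that policy, with an in-memory counter that resets each hour (so the thresholds apply per process):

const hourlyCounts = new Map(); // fingerprint → occurrences this hour

function shouldCapture(fp) {
  const n = (hourlyCounts.get(fp) || 0) + 1;
  hourlyCounts.set(fp, n);
  if (n === 1) return true;      // always capture the first occurrence
  if (n <= 100) return true;     // keep low-volume errors in full
  return Math.random() < 0.1;    // sample high-frequency repeats at 10%
}

setInterval(() => hourlyCounts.clear(), 60 * 60 * 1000); // reset the hourly window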
[Image: Error flow from the browser through source map resolution to a grouped dashboard]
According to Sentry's pricing page, the Team plan starts at $26/month for 50,000 errors, while Datadog's pricing puts error tracking at roughly $15/month per host with no event cap. The pricing models vary wildly, and the costs compound fast for growing applications.
Here's what the major players charge in 2025:
| Tool | Free Tier | Paid Starting At | Event/Error Limit | Per-Event Overage |
|---|---|---|---|---|
| Sentry | 5K errors/mo | $26/mo (Team) | 50K errors | $0.000290/event |
| Bugsnag | 7,500 events/mo | $59/mo (Team) | 25K events/project | Custom pricing |
| Datadog | — | ~$15/mo per host | No event cap (sampled) | Included in APM |
| Rollbar | 5K events/mo | $31/mo (Essentials) | 25K events | $0.001/event |
Per-event pricing sounds fair until you ship a bug that loops. A single uncaught exception in a useEffect that runs on every render can generate hundreds of errors per second. At even 100 errors per second, Sentry's entire 50K Team-tier monthly quota is gone in roughly eight minutes.
You end up choosing between two bad options: set aggressive client-side rate limiting (and risk missing real errors) or pay overage fees that can double or triple your bill. Neither is great.
Most teams don't run a single service. You've got a frontend, an API server, maybe a background job processor and a webhook handler. Each one generates errors independently. Sentry counts them all against one pool, which is better than per-project pricing — but 50K events across four services means roughly 12,500 per service. That's not a lot.
We've seen applications generate 2,000-5,000 errors per day during normal operation — not from bugs, but from network timeouts, third-party API failures, and browser extensions injecting broken scripts. SaaS error tracking treats all of these as billable events.
The expensive part of SaaS error tracking isn't the tracking logic; self-hosted tools prove the core fits in a surprisingly small codebase. The cost is the infrastructure to ingest, store, process, and query millions of events in real time. You're paying for someone else to run the database.
Could you run that database yourself? Absolutely. That's exactly what open-source alternatives do.
According to the GlitchTip documentation, it runs on as little as 512MB of RAM and provides Sentry-compatible error tracking for free. Self-hosted Sentry, by contrast, requires 8GB+ of RAM and orchestrates 20+ Docker containers. The resource gap between these tools is massive.
Sentry publishes a self-hosted option that's feature-complete with their SaaS product. It's genuinely powerful. It's also genuinely heavy.
The self-hosted repository on GitHub spins up 20+ Docker containers: PostgreSQL, ClickHouse, Kafka, Zookeeper, Redis, Snuba, Symbolicator, relay nodes, and multiple Sentry worker processes. The documented minimum is 8GB of RAM, but in practice you'll want 16GB for anything beyond a small team.
It works. But running mini-Sentry on your infrastructure feels like deploying a small data center just to catch JavaScript errors.
GlitchTip takes the opposite approach. It's a lightweight, Sentry-SDK-compatible error tracker built with Django and PostgreSQL. That's it — no Kafka, no ClickHouse, no cluster of workers.
You can point any existing Sentry SDK at a GlitchTip instance by changing the DSN. Your client-side code doesn't change. GlitchTip handles error ingestion, grouping, and alerting, though it lacks Sentry's performance monitoring and session replay features.
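In practice the swap is one line; the host and project ID below are placeholders for your own instance:

import * as Sentry from '@sentry/react';

// Same SDK, same integrations; only the DSN changes
Sentry.init({
  dsn: 'https://<key>@glitchtip.example.com/1',
});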
The trade-off is clear: fewer features, dramatically less infrastructure.
Highlight.io is an open-source observability platform that bundles error tracking with session replay and logging. It's more ambitious than GlitchTip — closer in scope to Sentry's full product — but the self-hosted deployment uses ClickHouse for storage and requires more resources.
The open-source version is fully functional. Their cloud product adds managed infrastructure and support.
| Tool | Min RAM | Containers | Sentry SDK Compatible | Key Limitation |
|---|---|---|---|---|
| Self-hosted Sentry | 8GB+ | 20+ | Yes (native) | Heavy infrastructure |
| GlitchTip | 512MB | 2-3 | Yes (DSN swap) | No performance monitoring |
| Highlight.io | 4GB+ | 6+ | No (own SDK) | Separate SDK required |
The error tracking space has bifurcated into two extremes: full-featured but operationally expensive (Sentry), or lightweight but limited (GlitchTip). The missing middle ground is error tracking integrated into something you already run — like your deployment platform — so you get grouping, source maps, and alerting without a dedicated error tracking cluster.
A minimal error tracking system requires roughly 80 lines of code and a PostgreSQL table, handling the same core workflow that commercial tools use — capture, normalize, fingerprint, store, alert. It won't replace Sentry's features, but it'll catch and group your errors.
CREATE TABLE errors (
id SERIAL PRIMARY KEY,
fingerprint VARCHAR(64) NOT NULL,
message TEXT NOT NULL,
stack TEXT,
level VARCHAR(20) DEFAULT 'error',
metadata JSONB DEFAULT '{}',
occurrences INTEGER DEFAULT 1,
first_seen TIMESTAMPTZ DEFAULT NOW(),
last_seen TIMESTAMPTZ DEFAULT NOW(),
resolved BOOLEAN DEFAULT FALSE
);
CREATE UNIQUE INDEX idx_errors_fingerprint ON errors(fingerprint); -- must be unique for the ON CONFLICT upsert below
CREATE INDEX idx_errors_last_seen ON errors(last_seen DESC);
import crypto from 'crypto';
function fingerprint(error) {
// Strip variable data (line numbers, memory addresses) from the message
const normalized = error.message.replace(/0x[0-9a-f]+/gi, '<addr>')
.replace(/:\d+:\d+/g, ':<line>');
const key = `${error.type || 'Error'}:${normalized}:${error.topFrame || ''}`;
return crypto.createHash('sha256').update(key).digest('hex').slice(0, 16);
}
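// For instance, two illustrative crashes that differ only in a memory
// address and position normalize to the same key and collapse into one group:
//   fingerprint({ type: 'TypeError', message: 'Cannot read 0x7f3a at :12:8' })
//   fingerprint({ type: 'TypeError', message: 'Cannot read 0x91bc at :97:2' })
// Both become "TypeError:Cannot read <addr> at :<line>:" before hashing.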
// navigator.sendBeacon posts as text/plain, so parse every content type as JSON
app.use(express.json({ type: '*/*' }));

app.post('/api/errors', async (req, res) => {
const { message, stack, metadata } = req.body;
const fp = fingerprint({ message, type: metadata?.type, topFrame: stack?.split('\n')[1] });
await db.query(`
INSERT INTO errors (fingerprint, message, stack, metadata)
VALUES ($1, $2, $3, $4)
ON CONFLICT (fingerprint) DO UPDATE SET
occurrences = errors.occurrences + 1,
last_seen = NOW(),
metadata = errors.metadata || $4
`, [fp, message, stack, JSON.stringify(metadata)]);
res.status(202).json({ fingerprint: fp });
});
function initErrorTracking(endpoint) {
  // sendBeacon queues the payload even while the page is unloading
  const send = (data) => navigator.sendBeacon(endpoint, JSON.stringify(data));
window.onerror = (msg, source, line, col, err) => {
send({ message: msg, stack: err?.stack, metadata: { source, line, col, url: location.href } });
};
window.addEventListener('unhandledrejection', (e) => {
send({ message: e.reason?.message || String(e.reason), stack: e.reason?.stack,
metadata: { type: 'unhandledrejection', url: location.href } });
});
}
initErrorTracking('/api/errors');
let lastCheck = new Date();

async function checkAndAlert() {
  const since = lastCheck;
  lastCheck = new Date();
  // Watermark: alert only on error groups first seen since the previous
  // check, so a repeating error never pages twice and a fast-looping
  // new error still pages once
  const newErrors = await db.query(
    'SELECT * FROM errors WHERE first_seen > $1',
    [since]
  );
  for (const error of newErrors.rows) {
    await fetch(process.env.SLACK_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `New error: ${error.message}\nStack: ${error.stack?.split('\n')[0]}`
      })
    });
  }
}

setInterval(checkAndAlert, 60_000);
This DIY approach covers about 60% of what teams actually use in Sentry — error capture, grouping, and alerting. The missing 40% is source map resolution, release tracking, user context enrichment, and a proper dashboard UI. Those features take significantly more engineering effort than the core error pipeline.
This is a solid foundation for a side project or internal tool. But would you want to maintain this in production alongside your actual product? That's where the build-vs-buy calculation gets interesting.
According to the Stack Overflow Developer Survey, 54% of developers use some form of error monitoring in production, and it usually means running one more service. Temps takes a different approach: it captures frontend and backend errors through a built-in tracking system that shares infrastructure with its deployment engine, analytics, and session replay, so there's no separate service to install or maintain.
Temps injects a lightweight error collector into deployed applications. On the frontend, it hooks into window.onerror and unhandledrejection — the same mechanism described earlier. On the backend, it captures uncaught exceptions and unhandled rejections at the process level.
The key difference from a standalone tool: because Temps deploys your application, it already has your source maps. There's no separate upload step. When a minified stack trace arrives, Temps resolves it against the source maps from the build that produced the running deployment.
Errors are fingerprinted using a combination of the error type, normalized message, and top stack frames. Identical errors collapse into a single issue with an occurrence counter, first-seen and last-seen timestamps, and a list of affected users.
You can mark issues as resolved, and Temps will reopen them if the same fingerprint appears in a new deployment. That's release-aware error tracking without any configuration.
This is the feature that standalone error trackers can't easily replicate. Every error is tagged with the deployment that was running when it occurred. You can see a timeline: "Deploy abc123 introduced 3 new error groups." You can compare error rates between deployments. You can roll back if a deploy causes a spike.
The deployment platform is the error tracker. The data doesn't need to be correlated across systems because it lives in the same system.
Temps sends alerts through the same notification channels used for deployment events — Slack, Discord, webhooks, or email. You get notified about new error groups and error rate spikes. The alerting rules are simple and practical: alert on new, don't alert on known.
We've found that most teams configure Sentry, then ignore 90% of the alerts because the signal-to-noise ratio degrades over time. Error tracking works best when it's connected to the deployment lifecycle — you care about errors that new code introduced, not the background noise you've already accepted.
There's no separate error tracking URL or login. Errors appear in the same Temps dashboard where you manage deployments, view analytics, and watch session replays. One tab shows your deploy history, another shows the errors each deploy introduced.
No Sentry account. No DSN configuration. No SDK installation. If your app runs on Temps, errors are captured automatically.
Related: For a side-by-side comparison of 8 error tracking tools with pricing, see 8 Best Sentry Alternatives for Error Tracking. For the full observability picture, check How to Set Up OpenTelemetry Tracing.
GlitchTip is fully compatible with Sentry's official SDKs — you swap the DSN (Data Source Name) to point at your GlitchTip instance and everything works. The Sentry SDK protocol is well-documented, and GlitchTip implements the ingestion endpoint. You keep your existing @sentry/react or @sentry/node setup, change one configuration line, and errors flow to your own server. Self-hosted Sentry uses its own SDKs natively. Highlight.io requires its own SDK and is not Sentry-compatible.
A moderately trafficked web application generates between 500 and 5,000 errors per day during normal operation, according to Sentry's usage data. Many of these aren't bugs — they're network timeouts, third-party script failures, browser extension interference, and bot-generated noise. At 5,000 errors per day, you'd exhaust Sentry's free tier (5,000/month) in a single day. Understanding your error volume before choosing a pricing tier prevents bill shock.
According to the GlitchTip documentation, it runs on a $5/month VPS with 1GB of RAM and 1 vCPU. It needs PostgreSQL and optionally Redis for caching. Self-hosted Sentry requires a minimum of 8GB RAM and a multi-core CPU to run its 20+ Docker containers. For teams that want error tracking without the infrastructure burden, Temps bundles it into the deployment platform you're already running — no additional server required.
For small teams and side projects, a DIY error tracker built on PostgreSQL works surprisingly well. The core logic — capture, fingerprint, store, alert — is straightforward. The challenge comes with scale and features: source map resolution, user session linking, release regression detection, and a usable dashboard all require significant engineering time. Most teams find that the DIY approach works for the first 6 months, then they either adopt a tool or dedicate an engineer to maintaining the system. If error tracking isn't your core product, it probably shouldn't consume your engineering time.
Error tracking is a solved problem. The core mechanics — global error handlers, fingerprinting, source map resolution — haven't changed meaningfully in years. What has changed is the pricing. SaaS tools now charge per event for something that costs pennies to store in PostgreSQL.
You have real options. GlitchTip gives you Sentry compatibility on a $5 VPS. A DIY solution covers the basics in 80 lines of code. Or you can skip the entire category by using a deployment platform that includes error tracking out of the box.
If you're already self-hosting your deployments with Temps — or considering it — error tracking comes built in. No extra service, no SDK, no DSN, no per-event billing. Errors show up in the same dashboard as your deployments, analytics, and session replays.
curl -fsSL temps.sh/install.sh | bash