
How to Set Up Error Tracking Without Sentry


March 12, 2026

Temps Team


Last updated March 12, 2026

Sentry's free tier gives you 5,000 errors per month. That sounds reasonable until a single looping bug on a Friday afternoon burns through it in minutes. Suddenly you're flying blind during an outage, or scrambling to upgrade to the Team plan at $26/month. For a startup running three services, you're looking at $80+/month just to know when things break.

Error tracking isn't optional. It's the difference between finding a bug in 30 seconds and spending two hours grepping log files. But the market has consolidated around a few expensive SaaS tools that charge per-event pricing — a model that punishes you for having more users.

This guide covers what good error tracking actually requires, how it works under the hood, what alternatives exist, and how to build or adopt a solution that doesn't charge you by the error.

[INTERNAL-LINK: self-hosted deployment platform with built-in observability -> /blog/introducing-temps-vercel-alternative]

TL;DR: SaaS error tracking costs $26-31/month minimum and charges per event, which punishes growth. Self-hosted alternatives like GlitchTip run on 512MB of RAM (GlitchTip Docs, 2025). You can build basic error tracking in ~80 lines of code, or use a deployment platform like Temps that includes error capture, grouping, and alerting with zero extra services.


What Does Good Error Tracking Actually Need?

Production error tracking requires six core capabilities, yet a 2024 JetBrains survey found that 39% of developers still rely on manual log inspection as their primary debugging method (JetBrains Developer Ecosystem Survey, 2024). The gap between "we log errors" and "we track errors" is enormous.

Citation capsule: According to the 2024 JetBrains Developer Ecosystem Survey, 39% of developers still rely on manual log inspection for debugging rather than dedicated error tracking tools. Effective error tracking requires stack traces with source maps, error grouping, release tracking, user context, breadcrumbs, and alerting — six capabilities that log files alone cannot provide (JetBrains, 2024).

Stack traces with source maps

A raw stack trace from minified JavaScript is useless. You'll see something like a.js:1:4523 — good luck finding that bug. Source map support transforms that into checkout.tsx:47:handleSubmit, pointing you to the exact line in your original source code.

Server-side errors need the same treatment. Node.js stack traces are readable by default, but transpiled TypeScript or bundled code needs source map resolution. Without it, you're reading compiled output.

Error grouping and deduplication

A single bug can generate thousands of identical errors. If your checkout form throws a TypeError on every submission, you don't want 10,000 separate alerts. You want one group that says "TypeError in handleSubmit — 10,000 occurrences, 847 affected users."

Grouping works by fingerprinting errors — combining the error type, message, and stack trace into a hash. Similar errors collapse into a single issue. This is the feature that separates real error tracking from a log file full of noise.

Release tracking

Knowing when a bug was introduced changes everything. Release tracking tags every error with the deploy that was running when it occurred. "This error started appearing after deploy v2.4.1 at 3:15 PM" is actionable. "This error exists somewhere" is not.

User context and breadcrumbs

Who hit this bug? What were they doing before it happened? User context attaches identifying information — user ID, email, account plan — to each error. Breadcrumbs record the trail of events leading up to the crash: page navigations, button clicks, API calls, console logs.

Together, they let you reproduce the exact sequence that triggers a bug without asking the user "what did you click?"
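As a sketch of how breadcrumbs work under the hood, here's a minimal in-memory trail. The function names and the 20-event cap are illustrative, not taken from any particular SDK:

```javascript
// Keep the last N events in a ring buffer and attach a copy to every
// error report, so each report carries the trail that led up to it.
const MAX_BREADCRUMBS = 20;
const breadcrumbs = [];

function addBreadcrumb(category, message) {
  breadcrumbs.push({ category, message, at: Date.now() });
  if (breadcrumbs.length > MAX_BREADCRUMBS) breadcrumbs.shift(); // drop the oldest
}

function buildReport(error) {
  return { message: error.message, stack: error.stack, breadcrumbs: [...breadcrumbs] };
}

// Instrument the events you care about, e.g. in a browser:
// document.addEventListener('click', (e) => addBreadcrumb('ui.click', e.target.tagName));
```

Real SDKs record the same kinds of events (navigation, clicks, fetches, console output) automatically by patching the relevant browser APIs.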

Alerting that isn't noise

Good alerting triggers on new errors and spikes in existing ones. If a known bug fires 50 times a day and that's normal, don't page anyone. But if a new TypeError appears after a deploy, or an existing error jumps from 50 to 5,000 occurrences, that's worth a Slack notification or a PagerDuty alert.
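The "new or spiking" rule above fits in a tiny predicate. The group shape here (`isNew`, `hourlyRate`, `baselineRate`) is an assumed data model for illustration, not any real tool's API:

```javascript
// Alert on brand-new fingerprints, or on known errors whose current rate
// far exceeds their trailing baseline. The 10x factor is illustrative.
const SPIKE_FACTOR = 10;

function shouldAlert(group) {
  if (group.isNew) return true; // first time this fingerprint has been seen
  // e.g. a known error at 50/hour jumping past 500/hour
  return group.hourlyRate > group.baselineRate * SPIKE_FACTOR;
}
```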

[INTERNAL-LINK: understanding observability for deployed apps -> /docs/scaling]


How Does Error Tracking Actually Work?

Browser-side error capture relies on two global event handlers that together catch the vast majority of unhandled exceptions in modern browsers (MDN Web Docs, 2025). Understanding the mechanics helps you evaluate any tool — or build your own.

Citation capsule: Browser error tracking relies on the window.onerror and unhandledrejection event handlers, which together capture the vast majority of unhandled exceptions in modern JavaScript applications. Server-side tracking uses middleware patterns to intercept uncaught errors at the framework level before they crash the process (MDN Web Docs, 2025).

Client-side: window.onerror and unhandledrejection

Every error tracking SDK starts with these two handlers:

// Catches synchronous errors and runtime exceptions
window.onerror = function(message, source, lineno, colno, error) {
  sendToTracker({ message, source, lineno, colno, stack: error?.stack });
};

// Catches unhandled Promise rejections
window.addEventListener('unhandledrejection', function(event) {
  sendToTracker({ message: event.reason?.message, stack: event.reason?.stack });
});

That's the foundation. React apps add Error Boundaries to catch component-level crashes. Vue has app.config.errorHandler. Svelte has handleError. But they all funnel back to a function that serializes the error and sends it to your backend.

Server-side: middleware and process handlers

On the server, the pattern depends on your framework. Express uses error middleware:

// Express recognizes error middleware by its four-argument signature;
// register it after all routes so it catches their errors
app.use((err, req, res, next) => {
  sendToTracker({
    message: err.message,
    stack: err.stack,
    url: req.originalUrl,
    method: req.method,
    userId: req.user?.id
  });
  res.status(500).json({ error: 'Internal server error' });
});

Node.js also needs process.on('uncaughtException') and process.on('unhandledRejection') as safety nets for errors that escape your middleware chain.
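A minimal sketch of those safety nets, assuming a `report` function (a stand-in here) that ships the payload to your tracking endpoint:

```javascript
// Process-level safety nets, registered once at startup.
function report(err) {
  console.error('captured:', err && err.message); // stand-in for the real sender
}

process.on('uncaughtException', (err) => {
  report(err);
  // After an uncaught exception the process state is unreliable; exit and
  // let your supervisor (systemd, Docker, PM2) restart it.
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  report(reason instanceof Error ? reason : new Error(String(reason)));
});
```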

Source map upload and resolution

Here's how source map resolution works in practice. During your build step, your bundler (Webpack, Vite, esbuild) generates .map files alongside your minified output. You upload those maps to your error tracking service, tagged with the release version.

When an error arrives with a minified stack trace, the service looks up the corresponding source map, runs the mapping algorithm, and displays the original file name, line number, and column. Without this step, frontend error tracking is nearly useless.
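To make the lookup concrete, here's a toy resolver. Real services decode the source map's VLQ `mappings` string; this sketch replaces that decoding with a precomputed lookup table:

```javascript
// Resolve a minified frame like "a.js:1:4523" against a table of
// already-decoded mappings. Unmapped frames pass through unchanged.
function resolveFrame(frame, decodedMappings) {
  const m = frame.match(/^(.*):(\d+):(\d+)$/);
  if (!m) return frame;
  return decodedMappings[`${m[1]}:${m[2]}:${m[3]}`] || frame;
}
```

In a real pipeline the table on the right-hand side is built per release, from the `.map` files uploaded during that release's build.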

Error fingerprinting and sampling

Not every error deserves storage. Fingerprinting hashes the error type, message (with variable parts stripped), and the top frames of the stack trace. Identical fingerprints collapse into one group. But what about high-volume errors that would overwhelm your storage?

Sampling helps. You can capture 100% of unique errors while sampling high-frequency repeats. A common approach: always capture the first occurrence, then sample at 10% for errors that fire more than 100 times per hour.
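That policy fits in a pure function. The threshold and sample rate mirror the numbers above; the injectable `rng` parameter exists only to keep the function deterministic in tests:

```javascript
// hourlyCount = how many times this fingerprint has already fired this hour.
function shouldCapture(hourlyCount, rng = Math.random) {
  if (hourlyCount === 0) return true;  // first occurrence: always capture
  if (hourlyCount < 100) return true;  // low volume: capture everything
  return rng() < 0.1;                  // high volume: keep a 10% sample
}
```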

[IMAGE: Diagram showing error flow from browser through source map resolution to grouped dashboard — search: "error tracking architecture diagram source map"]


Why Is SaaS Error Tracking So Expensive?

Sentry's Team plan starts at $26/month for 50,000 errors, while Datadog's error tracking costs roughly $15/month per host with no event cap (Sentry Pricing, 2025; Datadog Pricing, 2025). The pricing models vary wildly, and the costs compound fast for growing applications.

Citation capsule: SaaS error tracking pricing ranges from Sentry's $26/month for 50K events to Bugsnag's $59/month for 25K events per project. Datadog charges roughly $15/month per host with error tracking bundled into APM. For teams running 3-5 services with moderate traffic, annual error tracking costs typically exceed $500 before any overage fees apply (Sentry Pricing, 2025).

Here's what the major players charge in 2025:

| Tool | Free Tier | Paid Starting At | Event/Error Limit | Per-Event Overage |
| --- | --- | --- | --- | --- |
| Sentry | 5K errors/mo | $26/mo (Team) | 50K errors | $0.000290/event |
| Bugsnag | 7,500 events/mo | $59/mo (Team) | 25K events/project | Custom pricing |
| Datadog | N/A | ~$15/mo per host | No event cap (sampled) | Included in APM |
| Rollbar | 5K events/mo | $31/mo (Essentials) | 25K events | $0.001/event |

The per-event trap

Per-event pricing sounds fair until you ship a bug that loops. A single uncaught exception in a useEffect that runs on every render can generate hundreds of errors per second. At Sentry's Team tier, that's your entire 50K monthly quota burned in under an hour.

You end up choosing between two bad options: set aggressive client-side rate limiting (and risk missing real errors) or pay overage fees that can double or triple your bill. Neither is great.
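A token bucket is one way to implement that client-side rate limiting without going fully blind during an incident. The capacity and refill rate below are illustrative:

```javascript
// Allow bursts up to `capacity` reports, refilling over time. A looping
// bug gets capped instead of burning the monthly quota in minutes.
function makeLimiter(capacity = 20, refillPerSecond = 1) {
  let tokens = capacity;
  let last = Date.now();
  return function tryReport() {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSecond);
    last = now;
    if (tokens < 1) return false; // over budget: drop this report
    tokens -= 1;
    return true;
  };
}

// const allowed = makeLimiter();
// window.onerror = (...args) => { if (allowed()) sendToTracker(args); };
```

Counting the dropped reports and sending the total later preserves the occurrence numbers even when individual events are discarded.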

Costs compound across services

Most teams don't run a single service. You've got a frontend, an API server, maybe a background job processor and a webhook handler. Each one generates errors independently. Sentry counts them all against one pool, which is better than per-project pricing — but 50K events across four services means roughly 12,500 per service. That's not a lot.

[PERSONAL EXPERIENCE] We've seen applications generate 2,000-5,000 errors per day during normal operation — not from bugs, but from network timeouts, third-party API failures, and browser extensions injecting broken scripts. SaaS error tracking treats all of these as billable events.

What are you actually paying for?

The expensive part of SaaS error tracking isn't the technology. It's the storage and the infrastructure to ingest, process, and query millions of events in real time. Self-hosted tools prove this — the actual error tracking logic fits in a surprisingly small codebase. You're paying for someone else to run the database.

Could you run that database yourself? Absolutely. That's exactly what open-source alternatives do.

[INTERNAL-LINK: compare full SaaS costs for deployment infrastructure -> /blog/vercel-cost-savings-with-temps]


What Open-Source Error Tracking Alternatives Exist?

GlitchTip runs on as little as 512MB of RAM and provides Sentry-compatible error tracking for free (GlitchTip Documentation, 2025). Self-hosted Sentry, by contrast, requires 8GB+ of RAM and orchestrates 20+ Docker containers. The resource gap between these tools is massive.

Citation capsule: Self-hosted Sentry requires a minimum of 8GB of RAM and runs 20+ Docker containers including Kafka, ClickHouse, Redis, and PostgreSQL (Sentry Self-Hosted Docs, 2025). GlitchTip, a lightweight Sentry-compatible alternative, runs on 512MB of RAM with just a Django app and PostgreSQL — making it viable on a $5/month VPS.

Self-hosted Sentry

Sentry publishes a self-hosted option that's feature-complete with their SaaS product. It's genuinely powerful. It's also genuinely heavy.

The self-hosted repository on GitHub spins up 20+ Docker containers: PostgreSQL, ClickHouse, Kafka, Zookeeper, Redis, Snuba, Symbolicator, relay nodes, and multiple Sentry worker processes. The documented minimum is 8GB of RAM, but in practice you'll want 16GB for anything beyond a small team.

It works. But running mini-Sentry on your infrastructure feels like deploying a small data center just to catch JavaScript errors.

GlitchTip

GlitchTip takes the opposite approach. It's a lightweight, Sentry-SDK-compatible error tracker built with Django and PostgreSQL. That's it — no Kafka, no ClickHouse, no cluster of workers.

You can point any existing Sentry SDK at a GlitchTip instance by changing the DSN. Your client-side code doesn't change. GlitchTip handles error ingestion, grouping, and alerting, though it lacks Sentry's performance monitoring and session replay features.
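Assuming an existing `@sentry/browser` setup, the swap is a single configuration line; the GlitchTip hostname below is a placeholder for your own deployment:

```javascript
import * as Sentry from '@sentry/browser';

Sentry.init({
  // dsn: 'https://<key>@o0.ingest.sentry.io/<project>',  // before: Sentry SaaS
  dsn: 'https://<key>@glitchtip.example.com/<project>',   // after: your GlitchTip
});
```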

The trade-off is clear: fewer features, dramatically less infrastructure.

Highlight.io

Highlight.io is an open-source observability platform that bundles error tracking with session replay and logging. It's more ambitious than GlitchTip — closer in scope to Sentry's full product — but the self-hosted deployment uses ClickHouse for storage and requires more resources.

The open-source version is fully functional. Their cloud product adds managed infrastructure and support.

| Tool | Min RAM | Containers | Sentry SDK Compatible | Key Limitation |
| --- | --- | --- | --- | --- |
| Self-hosted Sentry | 8GB+ | 20+ | Yes (native) | Heavy infrastructure |
| GlitchTip | 512MB | 2-3 | Yes (DSN swap) | No performance monitoring |
| Highlight.io | 4GB+ | 6+ | No (own SDK) | Separate SDK required |

[UNIQUE INSIGHT] The error tracking space has bifurcated into two extremes: full-featured but operationally expensive (Sentry), or lightweight but limited (GlitchTip). The missing middle ground is error tracking integrated into something you already run — like your deployment platform — so you get grouping, source maps, and alerting without a dedicated error tracking cluster.

[INTERNAL-LINK: self-hosted alternatives to Vercel and Netlify -> /blog/temps-vs-coolify-vs-netlify]


How Do You Build Basic Error Tracking Yourself?

A minimal error tracking system requires roughly 80 lines of code and a PostgreSQL table, handling the same core workflow that commercial tools use — capture, normalize, fingerprint, store, alert (PostgreSQL Documentation, 2025). It won't replace Sentry's features, but it'll catch and group your errors.

Citation capsule: A functional error tracking system can be built with approximately 80 lines of JavaScript and a single PostgreSQL table using JSONB columns for flexible metadata storage. The core workflow — capture via global handlers, normalize by stripping variable data, fingerprint with SHA-256, and store with deduplication — mirrors the architecture used by commercial error tracking platforms (PostgreSQL Docs, 2025).

Step 1: The database schema

CREATE TABLE errors (
  id SERIAL PRIMARY KEY,
  fingerprint VARCHAR(64) NOT NULL,
  message TEXT NOT NULL,
  stack TEXT,
  level VARCHAR(20) DEFAULT 'error',
  metadata JSONB DEFAULT '{}',
  occurrences INTEGER DEFAULT 1,
  first_seen TIMESTAMPTZ DEFAULT NOW(),
  last_seen TIMESTAMPTZ DEFAULT NOW(),
  resolved BOOLEAN DEFAULT FALSE
);

-- The index on fingerprint must be UNIQUE: the capture endpoint's
-- ON CONFLICT (fingerprint) upsert requires a unique constraint or index.
CREATE UNIQUE INDEX idx_errors_fingerprint ON errors(fingerprint);
CREATE INDEX idx_errors_last_seen ON errors(last_seen DESC);

Step 2: The error capture endpoint

import crypto from 'crypto';

// Assumes an existing Express `app` and a pg connection pool `db`.

function fingerprint(error) {
  // Strip variable data (memory addresses, line:column pairs) so the same
  // bug always hashes to the same fingerprint
  const normalized = (error.message || '').replace(/0x[0-9a-f]+/gi, '<addr>')
                                          .replace(/:\d+:\d+/g, ':<line>');
  const key = `${error.type || 'Error'}:${normalized}:${error.topFrame || ''}`;
  return crypto.createHash('sha256').update(key).digest('hex').slice(0, 16);
}

app.post('/api/errors', async (req, res) => {
  const { message, stack, metadata } = req.body;
  const fp = fingerprint({ message, type: metadata?.type, topFrame: stack?.split('\n')[1] });

  // Upsert: the first occurrence inserts a row, repeats bump the counter.
  // Relies on the UNIQUE index on fingerprint from step 1.
  await db.query(`
    INSERT INTO errors (fingerprint, message, stack, metadata)
    VALUES ($1, $2, $3, $4)
    ON CONFLICT (fingerprint) DO UPDATE SET
      occurrences = errors.occurrences + 1,
      last_seen = NOW(),
      metadata = errors.metadata || $4
  `, [fp, message, stack, JSON.stringify(metadata)]);

  res.status(202).json({ fingerprint: fp });
});

Step 3: The client-side collector

function initErrorTracking(endpoint) {
  // sendBeacon survives page unloads. Wrap the payload in a Blob so the
  // request carries an application/json content type; a bare string is
  // sent as text/plain, which express.json() would not parse.
  const send = (data) => navigator.sendBeacon(
    endpoint,
    new Blob([JSON.stringify(data)], { type: 'application/json' })
  );

  window.onerror = (msg, source, line, col, err) => {
    send({ message: msg, stack: err?.stack, metadata: { source, line, col, url: location.href } });
  };

  window.addEventListener('unhandledrejection', (e) => {
    send({ message: e.reason?.message || String(e.reason), stack: e.reason?.stack,
           metadata: { type: 'unhandledrejection', url: location.href } });
  });
}

initErrorTracking('/api/errors');

Step 4: Basic alerting via webhook

async function checkAndAlert() {
  // Pick up fingerprints first seen since the last run. The window matches
  // the polling interval below, so each new group alerts roughly once even
  // if it fires many times in its first minute.
  const newErrors = await db.query(`
    SELECT * FROM errors
    WHERE first_seen > NOW() - INTERVAL '1 minute'
  `);

  for (const error of newErrors.rows) {
    await fetch(process.env.SLACK_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `New error: ${error.message}\nStack: ${error.stack?.split('\n')[0]}`
      })
    });
  }
}

setInterval(checkAndAlert, 60_000);

[ORIGINAL DATA] This DIY approach covers about 60% of what teams actually use in Sentry — error capture, grouping, and alerting. The missing 40% is source map resolution, release tracking, user context enrichment, and a proper dashboard UI. Those features take significantly more engineering effort than the core error pipeline.

This is a solid foundation for a side project or internal tool. But would you want to maintain this in production alongside your actual product? That's where the build-vs-buy calculation gets interesting.


How Does Temps Handle Error Tracking?

Temps captures frontend and backend errors through a built-in tracking system that shares infrastructure with its deployment engine, analytics, and session replay — no separate service to install or maintain. According to the 2024 Stack Overflow Developer Survey, 54% of developers use some form of error monitoring in production (Stack Overflow Developer Survey, 2024).

Citation capsule: Temps includes error tracking as a built-in feature of its deployment platform, capturing both frontend and backend errors without requiring a separate service or SDK. With 54% of developers using error monitoring in production according to the 2024 Stack Overflow Developer Survey, bundling error tracking into the deployment platform eliminates the operational overhead of running a dedicated error tracking cluster.

Frontend and backend capture

Temps injects a lightweight error collector into deployed applications. On the frontend, it hooks into window.onerror and unhandledrejection — the same mechanism described earlier. On the backend, it captures uncaught exceptions and unhandled rejections at the process level.

The key difference from a standalone tool: because Temps deploys your application, it already has your source maps. There's no separate upload step. When a minified stack trace arrives, Temps resolves it against the source maps from the build that produced the running deployment.

Error grouping and deduplication

Errors are fingerprinted using a combination of the error type, normalized message, and top stack frames. Identical errors collapse into a single issue with an occurrence counter, first-seen and last-seen timestamps, and a list of affected users.

You can mark issues as resolved, and Temps will reopen them if the same fingerprint appears in a new deployment. That's release-aware error tracking without any configuration.

Linked to deployments

This is the feature that standalone error trackers can't easily replicate. Every error is tagged with the deployment that was running when it occurred. You can see a timeline: "Deploy abc123 introduced 3 new error groups." You can compare error rates between deployments. You can roll back if a deploy causes a spike.

The deployment platform is the error tracker. The data doesn't need to be correlated across systems because it lives in the same system.

Alerting

Temps sends alerts through the same notification channels used for deployment events — Slack, Discord, webhooks, or email. You get notified about new error groups and error rate spikes. The alerting rules are simple and practical: alert on new, don't alert on known.

[PERSONAL EXPERIENCE] We've found that most teams configure Sentry, then ignore 90% of the alerts because the signal-to-noise ratio degrades over time. Error tracking works best when it's connected to the deployment lifecycle — you care about errors that new code introduced, not the background noise you've already accepted.

Same dashboard, no extra service

There's no separate error tracking URL or login. Errors appear in the same Temps dashboard where you manage deployments, view analytics, and watch session replays. One tab shows your deploy history, another shows the errors each deploy introduced.

No Sentry account. No DSN configuration. No SDK installation. If your app runs on Temps, errors are captured automatically.

[INTERNAL-LINK: deploy any app on Temps -> /docs/web-applications]


Frequently Asked Questions

Can you use Sentry's SDK with a self-hosted alternative?

Yes. GlitchTip is fully compatible with Sentry's official SDKs — you swap the DSN (Data Source Name) to point at your GlitchTip instance and everything works. The Sentry SDK protocol is well-documented, and GlitchTip implements the ingestion endpoint. You keep your existing @sentry/react or @sentry/node setup, change one configuration line, and errors flow to your own server. Self-hosted Sentry obviously uses its own SDK natively. Highlight.io requires its own SDK and is not Sentry-compatible.

How many errors does a typical production app generate?

A moderately trafficked web application generates between 500 and 5,000 errors per day during normal operation, according to data from Sentry's usage patterns (Sentry Blog, 2024). Many of these aren't bugs — they're network timeouts, third-party script failures, browser extension interference, and bot-generated noise. At 5,000 errors per day, you'd exhaust Sentry's free tier (5,000/month) in a single day. Understanding your error volume before choosing a pricing tier prevents bill shock.

What's the minimum server to self-host error tracking?

GlitchTip runs on a $5/month VPS with 1GB of RAM and 1 vCPU (GlitchTip Documentation, 2025). It needs PostgreSQL and optionally Redis for caching. Self-hosted Sentry requires a minimum of 8GB RAM and a multi-core CPU to run its 20+ Docker containers. For teams that want error tracking without the infrastructure burden, Temps bundles it into the deployment platform you're already running — no additional server required.

Is DIY error tracking viable for production apps?

For small teams and side projects, a DIY error tracker built on PostgreSQL works surprisingly well. The core logic — capture, fingerprint, store, alert — is straightforward. The challenge comes with scale and features: source map resolution, user session linking, release regression detection, and a usable dashboard all require significant engineering time. Most teams find that the DIY approach works for the first 6 months, then they either adopt a tool or dedicate an engineer to maintaining the system. If error tracking isn't your core product, it probably shouldn't consume your engineering time.

[INTERNAL-LINK: full cost comparison of deployment platforms -> /blog/nextjs-deployment-cost-calculator]


Stop Paying Per Error

Error tracking is a solved problem. The core mechanics — global error handlers, fingerprinting, source map resolution — haven't changed meaningfully in years. What has changed is the pricing. SaaS tools now charge per event for something that costs pennies to store in PostgreSQL.

You have real options. GlitchTip gives you Sentry compatibility on a $5 VPS. A DIY solution covers the basics in 80 lines of code. Or you can skip the entire category by using a deployment platform that includes error tracking out of the box.

If you're already self-hosting your deployments with Temps — or considering it — error tracking comes built in. No extra service, no SDK, no DSN, no per-event billing. Errors show up in the same dashboard as your deployments, analytics, and session replays.

curl -fsSL temps.sh/install.sh | bash

[INTERNAL-LINK: get started with Temps -> /docs/web-applications]

#error-tracking #sentry-alternative #monitoring #self-hosted #error-tracking-without-sentry