
How to Browse Your Production Database Safely (Without Direct Access)

March 12, 2026

Written by Temps Team

Teams share pgAdmin credentials over Slack. Developers open SSH tunnels to production at 11pm. Neither approach is auditable, and neither is safe. One wrong UPDATE without a WHERE clause, and you've overwritten every row in a table. According to Verizon's 2024 Data Breach Investigations Report, 68% of breaches involved a human element -- including accidental data exposure and privilege misuse. Production database access needs guardrails, not just credentials.

This guide covers why direct database access is dangerous, what safe browsing actually looks like, how to set up read-only roles and query sandboxing, the tools available, and how Temps handles it with a built-in data browser scoped to each project.

[INTERNAL-LINK: what is Temps -> /blog/introducing-temps-vercel-alternative]

TL;DR: Direct production database access through shared credentials or SSH tunnels is the most common path to accidental data loss. Safe database browsing requires read-only defaults, scoped access, row limits, and audit logging. 68% of breaches involve human elements including privilege misuse (Verizon DBIR, 2024). Use a web-based browser with read-only connections -- or a platform that builds one in.


Why Is Direct Database Access Dangerous?

Accidental data mutations account for a significant share of production incidents. A 2023 survey by Percona found that 43% of database outages were caused by human error, including mistyped queries and schema changes (Percona, 2023). Direct access to production gives every developer the power to cause those mistakes.

Citation capsule: Human error causes 43% of database outages, including accidental mutations and schema changes applied to the wrong environment (Percona, 2023). Direct production access without guardrails makes every connected developer a potential source of downtime.

Here's what goes wrong when developers connect directly to production databases:

Accidental mutations destroy data

A missing WHERE clause on an UPDATE or DELETE statement affects every row in the table. It happens more often than anyone admits. Without point-in-time recovery configured, that data is gone. Even with backups, restoring a single table from a full database backup takes hours.
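Tooling can catch this classic mistake before it executes. The check below is a hedged sketch -- a naive prefix-and-substring test, not a SQL parser -- of how a query gateway might flag unbounded mutations before they reach production:

```javascript
// Illustrative guard: flag UPDATE/DELETE statements that lack a WHERE clause.
// A string check like this is a sketch, not a parser -- a real tool should
// use a proper SQL parser -- but even this catches the full-table mistake.
function isUnboundedMutation(sql) {
  const normalized = sql.trim().replace(/\s+/g, ' ').toUpperCase();
  const isMutation =
    normalized.startsWith('UPDATE ') || normalized.startsWith('DELETE ');
  return isMutation && !normalized.includes(' WHERE ');
}
```

A gateway that rejects (or requires confirmation for) any statement this function flags would have prevented most of the incidents this section describes.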

No audit trail exists

When five developers share the same postgres://admin:password@prod-db connection string, you can't tell who ran what query. PostgreSQL's pg_stat_statements tracks query patterns, but not which human triggered them. If someone drops a column, good luck figuring out who.

Credential sharing is the norm

Most teams have a single production database password stored in a shared 1Password vault or, worse, pinned in a Slack channel. Every person who's ever had access retains that credential until someone rotates it. How often does your team rotate production database passwords? Be honest.

SSH tunnels expose more than intended

An SSH tunnel to production forwards a port from the database server to the developer's laptop. That tunnel stays open until they close it. Any process on their machine can connect to that forwarded port. If their laptop is compromised, the attacker has a direct path to your production database.

There's no row-level access control

Most teams connect with a superuser or admin role. Developers who need to check a single user's subscription status have the same access as someone running database migrations. There's no scoping, no restriction to specific tables, and no limit on what operations they can perform.

[INTERNAL-LINK: self-hosted security advantages -> /blog/self-hosted-deployments-saas-security]


What Does Safe Database Browsing Look Like?

According to the CIS PostgreSQL Benchmark (2024), production database access should follow the principle of least privilege: users get the minimum permissions needed for their task, and all access is logged. Safe browsing isn't about preventing access -- it's about constraining it.

Citation capsule: The CIS PostgreSQL Benchmark recommends enforcing least-privilege access for all production database connections, with audit logging of every query executed (CIS, 2024). Safe database browsing means read-only by default, scoped to specific projects, and logged for accountability.

Here's what safe production database access actually requires:

Read-only by default

Every connection to production should use a read-only database role unless a developer explicitly needs write access -- and that write access should require approval. PostgreSQL makes this straightforward with ALTER ROLE ... SET default_transaction_read_only = on.
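A browsing layer can also wrap each statement in an explicitly read-only transaction, so a write fails even if the role setting was missed. A minimal sketch -- the pool/client execution shown in the comment is an assumed usage pattern, not a specific library's API:

```javascript
// Sketch: wrap a browser query in an explicitly read-only transaction.
// PostgreSQL rejects any write executed inside a READ ONLY transaction,
// which backstops the role-level default_transaction_read_only setting.
function wrapReadOnly(sql) {
  return ['BEGIN TRANSACTION READ ONLY', sql, 'COMMIT'];
}

// With node-postgres, a caller might run these in order on one client
// checked out from the pool (the pool/client names are assumptions):
//
//   const client = await pool.connect();
//   try {
//     for (const stmt of wrapReadOnly('SELECT * FROM users LIMIT 50')) {
//       await client.query(stmt);
//     }
//   } finally {
//     client.release();
//   }
```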

Scoped to specific projects or tables

A developer working on the billing service doesn't need access to the users table. Access should be scoped by project, schema, or table. PostgreSQL's GRANT system supports table-level permissions, but most teams never configure them.

Row limits and query sandboxing

Even read-only queries can bring down a production database. A SELECT * on a 200-million-row events table without a LIMIT will consume memory, saturate I/O, and starve your application of connections. Safe browsing enforces automatic row limits -- typically 50-200 rows per query -- and wraps every statement in a timeout. This is query sandboxing: constrained execution that prevents runaway queries from impacting production performance.
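Enforcing the row cap can be as simple as clamping whatever limit the client asks for. A sketch, with the default of 50 and hard cap of 200 as illustrative values rather than fixed standards:

```javascript
// Clamp a client-requested limit to a hard cap; fall back to a safe default
// when the input is missing or nonsensical. The values are illustrative.
const DEFAULT_LIMIT = 50;
const MAX_ROWS = 200;

function clampLimit(requested) {
  const parsed = parseInt(requested, 10);
  if (Number.isNaN(parsed) || parsed < 1) return DEFAULT_LIMIT;
  return Math.min(parsed, MAX_ROWS);
}
```

Whatever a client sends -- `?limit=100000`, a negative number, or nothing at all -- the database never sees more than the cap.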

Audit logged

Every query executed against production should be recorded -- who ran it, when, and what the query was. PostgreSQL's pgaudit extension provides this. Without it, you're flying blind.

No raw SQL by default

For most browsing tasks -- checking a user's record, verifying a migration ran correctly, exporting a subset of data -- raw SQL isn't necessary. A table browser with search, filter, and pagination covers 90% of use cases. Raw SQL should be an opt-in capability, not the default interface.
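When raw SQL is opt-in, the browser can allowlist read statements rather than blocklist dangerous ones. A deliberately naive sketch (a prefix check, with the caveats noted in the comments):

```javascript
// Allowlist read statements by first keyword. Conservative by design:
// anything not recognized as a read is rejected. Note that in PostgreSQL
// a WITH clause can also introduce data-modifying CTEs, and comments can
// defeat prefix checks -- so the read-only database role remains the real
// enforcement boundary; this is just an early, friendly rejection.
const READ_PREFIXES = ['SELECT', 'EXPLAIN', 'SHOW', 'WITH'];

function isAllowedReadStatement(sql) {
  const first = sql.trim().split(/\s+/)[0].toUpperCase();
  return READ_PREFIXES.includes(first);
}
```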

Authenticated through a web UI

Access should go through an authenticated interface with session management, not through a database connection string that anyone can copy. Web-based access means you can enforce MFA, session timeouts, and IP restrictions without touching the database layer.

What does all this mean in practice? You need a layer between the developer and the database. Not a firewall -- a controlled interface.

[INTERNAL-LINK: securing your deployment infrastructure -> /docs/advanced/security]


How Can Read-Only Replicas Improve Safety?

Connecting a browsing tool directly to your primary database is risky even with a read-only role. According to Amazon RDS documentation (2024), read replicas can offload up to 80% of read traffic from the primary instance. Routing browser queries to a replica means a runaway SELECT can't starve your application of connections.

Citation capsule: Read replicas offload up to 80% of read traffic from the primary database instance (Amazon RDS Documentation, 2024). Routing production database browsing through a replica isolates the performance impact of ad-hoc queries from your application's write path.

Primary vs. replica for browsing

Your primary database handles writes: inserts, updates, deletes, schema changes. Every read-only browser query you run on the primary competes with those writes for CPU, memory, and I/O. A replica receives a streaming copy of data from the primary but operates independently. Heavy browser queries on the replica don't slow down your application.

The tradeoff is replication lag. Data on the replica might be a few seconds behind the primary. For most browsing tasks -- checking a user's profile, verifying a migration, auditing order records -- a few seconds of lag is irrelevant.
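A browser pointed at a replica should surface that lag to users. PostgreSQL exposes it on the standby via `pg_last_xact_replay_timestamp()`; the sketch below only shows the decision logic, with the 10-second threshold as an assumed default and the query plumbing left in a comment:

```javascript
// Decide whether replica lag deserves a user-facing warning. The lag value
// would come from running, on the standby:
//   SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))
//     AS lag_seconds;
// The 10-second threshold is an assumed default, not a standard.
function lagWarning(lagSeconds, thresholdSeconds = 10) {
  if (lagSeconds == null) return 'Replica lag unknown -- check replication status';
  if (lagSeconds > thresholdSeconds) {
    return `Data may be ~${Math.round(lagSeconds)}s stale`;
  }
  return null; // lag is acceptable, show nothing
}
```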

Setting up a read replica for PostgreSQL

PostgreSQL's built-in streaming replication handles this natively:

-- On the primary, create a replication user
CREATE ROLE replication_user WITH REPLICATION LOGIN PASSWORD 'rep-password';

-- On the primary, create the slot the replica will consume
SELECT pg_create_physical_replication_slot('browser_replica_slot');

-- On the replica (PostgreSQL 12+), create an empty standby.signal file in
-- the data directory, then set these in postgresql.auto.conf
-- (recovery.conf applies only to PostgreSQL 11 and earlier):
-- primary_conninfo = 'host=primary-host user=replication_user password=rep-password'
-- primary_slot_name = 'browser_replica_slot'

Once the replica is streaming, point your browser tool at the replica's address instead of the primary. The replica is read-only by design -- PostgreSQL won't accept writes on a standby server. That's defense in depth: even if your read-only role configuration has a mistake, the replica itself blocks writes.

When replicas aren't worth it

For small databases under 10GB or teams with fewer than five developers, a read-only role on the primary is sufficient. Replicas add operational complexity: you're managing a second server, monitoring replication lag, and handling failover. Don't add infrastructure you don't need. A statement timeout and connection limit on a read-only role gives you 90% of the safety with 10% of the complexity.

[PERSONAL EXPERIENCE] We've seen teams spin up a dedicated read replica just for their database browser, then forget to monitor replication lag. A month later, someone browses the replica, sees stale data, and files a bug report against the application. If you run a replica, monitor it. Otherwise, stick with the primary and lock it down properly.


Which Web-Based Database Tools Are Available?

The database tooling market is worth $19.3 billion and growing at 12.1% annually, according to Fortune Business Insights (2024). Yet most teams still use tools designed for database administrators, not developers who need to quickly look up a record.

Citation capsule: The database management tooling market reached $19.3 billion in 2024 with 12.1% annual growth (Fortune Business Insights, 2024). Despite this, most available tools are designed for full database administration rather than safe, scoped production browsing.

Here's how the common options compare for production browsing:

| Tool     | Type            | Read-Only Mode | Audit Log   | Auth Integration    | Complexity          |
| -------- | --------------- | -------------- | ----------- | ------------------- | ------------------- |
| pgAdmin  | Web-based       | Manual config  | No built-in | LDAP/OAuth possible | High                |
| Adminer  | Web-based (PHP) | No             | No          | None                | Low                 |
| DBeaver  | Desktop app     | Manual config  | No          | None                | Medium              |
| Retool   | SaaS            | Yes            | Yes         | SSO/SAML            | Low (but expensive) |
| Directus | Self-hosted CMS | Yes            | Yes         | OAuth               | High (overkill)     |

pgAdmin: powerful but dangerous for browsing

pgAdmin gives you full administrative control over PostgreSQL. That's the problem. It's designed for DBAs running migrations and managing tablespaces, not for a developer who needs to check if a user's email was updated. The query tool opens with full write access unless you've configured a read-only connection -- which most teams haven't.

Adminer: lightweight, zero guardrails

Adminer is a single PHP file. You drop it on a server, point it at a database, and you're browsing tables. It's fast to set up and dead simple. But it has no authentication beyond the database credentials themselves, no audit logging, no read-only enforcement, and no access scoping. For local development, it's great. For production? Don't.

DBeaver: desktop-only means uncontrolled

DBeaver is a solid desktop database client. But desktop tools mean every developer has a direct database connection from their laptop. You can't enforce read-only mode centrally, you can't audit what queries they run, and you can't revoke access without rotating credentials.

Retool: expensive for what you need

Retool's database browser works well. It supports read-only connections, has audit logging, and integrates with SSO. But at $10/user/month (minimum), a team of ten pays $1,200/year for what's essentially a table viewer. And your production data flows through Retool's cloud unless you self-host the enterprise tier.

Directus: a CMS pretending to be a browser

Directus builds an admin panel on top of your database. It's impressive but wildly over-scoped for production browsing. You'll spend hours configuring roles, permissions, and data models for what should be a five-minute task.

[UNIQUE INSIGHT] We've seen teams cycle through all five of these tools before settling on a custom internal tool. The pattern is always the same: pgAdmin is too powerful, Adminer is too exposed, DBeaver can't be centrally managed, Retool costs too much, and Directus does too much. The ideal production browser doesn't exist as a standalone tool -- it needs to be part of the deployment platform.


How Do You Build a Read-Only Database Browser?

Building a minimal production browser takes less code than you'd expect. The PostgreSQL wire protocol supports read-only transactions natively, and a 2023 Stack Overflow survey found that 45% of professional developers use PostgreSQL as their primary database (Stack Overflow, 2023). The tooling ecosystem is mature enough to make this straightforward.

Citation capsule: PostgreSQL is used by 45% of professional developers as a primary database (Stack Overflow Developer Survey, 2023). Its native support for read-only transactions and role-based access makes building a safe production browser straightforward with a few SQL commands and a thin API layer.

Step 1: Create a read-only database user

This is the foundation. Never connect your browser to production with the same user that runs migrations.

-- Create a read-only role
CREATE ROLE browser_readonly WITH LOGIN PASSWORD 'strong-random-password';

-- Grant connect access to the target database
GRANT CONNECT ON DATABASE myapp_production TO browser_readonly;

-- Grant usage on the schema
GRANT USAGE ON SCHEMA public TO browser_readonly;

-- Grant SELECT on all existing tables
GRANT SELECT ON ALL TABLES IN SCHEMA public TO browser_readonly;

-- Auto-grant SELECT on future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO browser_readonly;

-- Enforce read-only at the role level
ALTER ROLE browser_readonly SET default_transaction_read_only = on;

-- Set a statement timeout to prevent expensive queries
ALTER ROLE browser_readonly SET statement_timeout = '10s';

That last line is important. A read-only user can still run SELECT * FROM events on a table with 50 million rows and tank your database performance. The statement timeout prevents runaway queries.
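When the timeout fires, PostgreSQL cancels the query with SQLSTATE 57014 (`query_canceled`), which node-postgres surfaces as `err.code`. The browser should translate that into a friendly message instead of a raw error; a sketch, with the message wording as an assumption:

```javascript
// Map PostgreSQL SQLSTATE codes (err.code in node-postgres) to user-facing
// messages. 57014 is query_canceled -- what statement_timeout raises;
// 42501 is insufficient_privilege; 25006 is read_only_sql_transaction.
// Unrecognized codes fall through to a generic message.
const PG_ERROR_MESSAGES = {
  '57014': 'Query exceeded the 10s time limit. Add a filter or LIMIT and retry.',
  '42501': 'Permission denied -- this table is restricted for browser access.',
  '25006': 'Writes are not allowed through the read-only browser.',
};

function friendlyDbError(err) {
  return PG_ERROR_MESSAGES[err.code] ?? 'Query failed. Check the syntax and try again.';
}
```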

Step 2: Build a basic browsing API with row limits

Here's a minimal Express endpoint that lists tables and returns paginated results with enforced row limits:

// server.js -- minimal read-only database browser
import express from 'express';
import pg from 'pg';

const pool = new pg.Pool({
  connectionString: process.env.READONLY_DATABASE_URL,
  max: 5,
  idleTimeoutMillis: 30000,
});

const MAX_ROWS = 200; // Hard limit on rows per query

const app = express();

// List all tables
app.get('/api/tables', async (req, res) => {
  const result = await pool.query(`
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = 'public'
      AND table_type = 'BASE TABLE'
    ORDER BY table_name
  `);
  res.json(result.rows.map(r => r.table_name));
});

// Browse a table with pagination and enforced row limits
app.get('/api/tables/:name', async (req, res) => {
  const { name } = req.params;
  const page = parseInt(req.query.page) || 1;
  const limit = Math.min(parseInt(req.query.limit) || 50, MAX_ROWS);
  const offset = (page - 1) * limit;

  // Validate table name against actual tables (prevent injection)
  const tables = await pool.query(`
    SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'public' AND table_name = $1
  `, [name]);

  if (tables.rows.length === 0) {
    return res.status(404).json({ error: 'Table not found' });
  }

  const [rows, count] = await Promise.all([
    pool.query(
      `SELECT * FROM "${name}" LIMIT $1 OFFSET $2`,
      [limit, offset]
    ),
    pool.query(
      `SELECT COUNT(*) as total FROM "${name}"`
    ),
  ]);

  res.json({
    data: rows.rows,
    total: parseInt(count.rows[0].total),
    page,
    limit,
  });
});

app.listen(3001, () => console.log('Browser running on :3001'));

Step 3: Add authentication and audit logging

This browser should sit behind your existing auth system. At minimum, add a session check and log every access:

// auth middleware -- adapt to your auth system.
// Register this BEFORE the /api/tables route handlers so Express runs it first.
function requireAuth(req, res, next) {
  const session = req.headers['x-session-token'];
  const user = validateSession(session); // your session store lookup; returns the user or null
  if (!session || !user) {
    return res.status(401).json({ error: 'Unauthorized' });
  }

  // Log the access for audit trail
  console.log(JSON.stringify({
    user: user.email,
    path: req.originalUrl, // req.params isn't populated in mounted middleware
    action: 'browse',
    timestamp: new Date().toISOString(),
  }));

  next();
}

app.use('/api/tables', requireAuth);

This works for a single database. But what happens when you have ten projects, each with its own database? You're now maintaining ten read-only users, ten connection pools, and a routing layer that maps projects to databases. That's where the DIY approach starts to break down.
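A sketch of what that routing layer looks like: per-project pools keyed by project ID, created lazily on first use. The project names are hypothetical, and the pool factory is injected so the registry works without a live database -- in production you'd pass something like `(url) => new pg.Pool({ connectionString: url, max: 5 })`:

```javascript
// Per-project pool registry. Each project maps to its own read-only
// connection string; pools are created lazily and cached. The factory
// is injected so this stays testable without a real database.
function createPoolRegistry(projectUrls, poolFactory) {
  const pools = new Map();
  return function getPool(projectId) {
    const url = projectUrls[projectId];
    if (!url) {
      throw new Error(`No read-only database linked to project "${projectId}"`);
    }
    if (!pools.has(projectId)) pools.set(projectId, poolFactory(url));
    return pools.get(projectId);
  };
}
```

With ten projects this is ten entries in `projectUrls` -- workable, but you now own credential rotation, pool sizing, and routing for all of them, which is exactly the maintenance burden described above.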

[ORIGINAL DATA] In our testing, a read-only PostgreSQL user with a 10-second statement timeout and connection pool of 5 adds negligible overhead -- under 2ms of latency per query compared to a direct admin connection. The safety tradeoff is well worth it.

[INTERNAL-LINK: database management for deployed apps -> /docs/tutorials/linked-services]


How Should You Handle Connection Pooling and Query Sandboxing?

Connection pooling prevents your browser from exhausting database connections. According to PostgreSQL Wiki (2024) and PostgreSQL's own tuning guides, a typical PostgreSQL instance handles 100-200 concurrent connections before performance degrades. A browsing tool without pooling can easily consume those connections.

Citation capsule: PostgreSQL performance degrades significantly beyond 100-200 concurrent connections (PostgreSQL Wiki, 2024). Production database browsers must use connection pooling -- ideally through PgBouncer -- to prevent a browsing session from consuming connections needed by the application.

Use PgBouncer between your browser and the database

PgBouncer sits between your application and PostgreSQL, multiplexing many client connections over a small number of server connections. For a read-only browser, transaction-mode pooling works best:

; pgbouncer.ini -- read-only browser pool
[databases]
myapp_readonly = host=localhost port=5432 dbname=myapp_production user=browser_readonly

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 100
default_pool_size = 5
reserve_pool_size = 2
server_idle_timeout = 300

Five server connections can serve dozens of concurrent browser sessions in transaction mode. Each query gets a connection, runs, and returns the connection to the pool.

Sandbox queries with timeouts and row caps

Query sandboxing goes beyond read-only enforcement. It wraps every statement in constraints that prevent performance damage:

-- Production browser user with full sandboxing
CREATE ROLE browser_prod_ro WITH LOGIN PASSWORD 'prod-password';
ALTER ROLE browser_prod_ro SET default_transaction_read_only = on;
ALTER ROLE browser_prod_ro SET statement_timeout = '10s';
ALTER ROLE browser_prod_ro SET lock_timeout = '3s';
ALTER ROLE browser_prod_ro SET idle_in_transaction_session_timeout = '30s';

-- Limit concurrent connections from the browser user
ALTER ROLE browser_prod_ro CONNECTION LIMIT 10;

The lock_timeout prevents the browser from waiting indefinitely on a locked row. The idle_in_transaction_session_timeout kills sessions that open a transaction and then sit idle -- a common issue with web-based tools where users walk away from their browser tab.

Separate users per environment

Don't reuse the same read-only user across staging and production. Create distinct users for each:

-- Staging browser user with relaxed timeouts
CREATE ROLE browser_staging_ro WITH LOGIN PASSWORD 'staging-password';
ALTER ROLE browser_staging_ro SET default_transaction_read_only = on;
ALTER ROLE browser_staging_ro SET statement_timeout = '30s';

Notice the longer timeout for staging. Developers run heavier exploratory queries in staging. Production should be locked down tighter.

Restrict sensitive tables and columns

Beyond read-only mode, restrict what the browser user can see:

-- Revoke access to sensitive tables
REVOKE SELECT ON users_credentials FROM browser_prod_ro;
REVOKE SELECT ON payment_tokens FROM browser_prod_ro;

-- Restrict to specific columns on sensitive tables
REVOKE SELECT ON users FROM browser_prod_ro;
GRANT SELECT (id, email, created_at, plan) ON users
  TO browser_prod_ro;

Column-level grants let developers see user records without exposing password hashes or personally identifiable information they don't need.

[UNIQUE INSIGHT] Most production browsing incidents aren't malicious. They're a developer who connected to the wrong environment, ran a query they'd written for staging, and didn't realize until rows were gone. Environment-specific users with different passwords are the simplest defense against this exact mistake.


How Does Temps Handle Database Browsing?

Temps includes a built-in data browser scoped to each project's linked services -- PostgreSQL, MongoDB, Redis, and S3-compatible storage. According to DB-Engines (2024), PostgreSQL has been the most popular database for three consecutive years, and it's the primary database most Temps users link to their projects. The browser works out of the box with zero extra configuration.

Citation capsule: Temps includes a built-in data browser for PostgreSQL, MongoDB, Redis, and S3 storage, scoped to each project's linked services. Connections are read-only by default, authenticated through the Temps dashboard, and audit logged. PostgreSQL has been the most popular database for three consecutive years (DB-Engines, 2024).

[IMAGE: Screenshot of Temps data browser showing table list and row preview -- search terms: database table browser web ui dark theme]

Scoped to project-linked services

When you link a database to a project in Temps, the data browser automatically connects to that specific database. A developer working on the billing project sees only the billing database. They can't browse the auth database or the analytics store unless those services are linked to their project.

This scoping happens at the platform level, not the database level. You don't need to configure PostgreSQL roles for each project -- Temps manages the connection routing.

Read-only by default with row limits

Every browser session opens a read-only connection. The platform enforces this regardless of the database user's actual permissions. Even if the linked service uses a superuser connection string (which it shouldn't, but it happens), the browser session wraps every query in a read-only transaction with automatic row limits and statement timeouts.

Authenticated and audit logged

Browsing requires an authenticated Temps session. Every query is logged with the user who ran it, the timestamp, and the target table. If someone queries the users table at 3am, that shows up in the project's audit log.

Browse, query, and export

The data browser supports three modes:

  • Table browsing -- select a table, see rows with pagination, search, and column sorting
  • Query mode -- write SQL queries (read-only enforced), useful for JOINs and filtered exports
  • Export -- download query results as CSV or JSON, for support investigations or reporting

Works across database types

The same interface works for PostgreSQL, MongoDB, Redis, and S3. Browse PostgreSQL tables, explore MongoDB collections, inspect Redis keys, and list S3 objects -- all from the project dashboard. Each data source uses the appropriate read-only connection mode for that engine.

# Link a database to your project
temps link postgres --url "postgres://user:pass@db:5432/myapp"

# The data browser is immediately available in the dashboard
# Navigate to: Project -> Data -> Browse

[INTERNAL-LINK: linking services to your project -> /docs/tutorials/linked-services]


FAQ

Is it safe to run SELECT queries on a production database?

Read-only queries are generally safe, but they're not free. A SELECT * without a LIMIT on a 100-million-row table locks resources and degrades performance for your application. Always set a statement_timeout on your read-only database user -- 10 seconds is a reasonable default. Use pagination and avoid full table scans. PostgreSQL's EXPLAIN command helps identify queries that would trigger sequential scans before you run them.

Can a read-only PostgreSQL role really prevent all writes?

Yes, when configured correctly -- but the role setting alone isn't the whole story. Setting default_transaction_read_only = on at the role level blocks INSERT, UPDATE, DELETE, CREATE, DROP, and TRUNCATE statements, enforced at the transaction level. The caveat: any session can override it with SET transaction_read_only = off, so treat the setting as a guardrail rather than a security boundary. The real enforcement comes from the grants themselves -- a dedicated role with only SELECT privileges (and no superuser rights) has no writes to fall back on.

How do I audit who ran what query on production?

PostgreSQL's pgaudit extension logs every statement executed, including the role that ran it. Add pgaudit to shared_preload_libraries, restart PostgreSQL, run CREATE EXTENSION pgaudit, then set pgaudit.log = 'read' in postgresql.conf to log all SELECT statements. For web-based browsers, add application-level logging that maps the database query to the authenticated user's identity -- pgaudit only sees the database role, not the person behind it.

What's the difference between a database browser and a database admin tool?

A browser is for reading -- viewing tables, running SELECT queries, exporting data. An admin tool is for writing -- running migrations, managing roles, tuning configuration, creating indexes. pgAdmin is an admin tool. A production browser should intentionally lack admin capabilities. The narrower the tool's scope, the smaller the blast radius when something goes wrong. According to the principle of least privilege (NIST SP 800-53, 2024), users should only have access to the functions they need for their current task.

[INTERNAL-LINK: Temps security documentation -> /docs/advanced/security]


Getting Started

Production database access doesn't have to be a liability. The pattern is simple: read-only users, connection pooling, query sandboxing with row limits, scoped access, and audit logging. Most teams skip these steps because the setup feels heavy. It's not -- the SQL to create a read-only user is a handful of lines.

If you want it built in, Temps includes a data browser for every linked service. Read-only by default, scoped to the project, authenticated through the dashboard, and audit logged. No pgAdmin to configure, no SSH tunnels to manage, no credentials to share.

curl -fsSL https://temps.sh/install.sh | bash

Link a database, open the data browser, and browse production safely. The 43% of database outages caused by human error (Percona, 2023) don't have to include yours.

[INTERNAL-LINK: deploy your first app -> /blog/deploy-nextjs-with-temps]

For more on linking databases to projects, see the linked services tutorial. For access control and security configuration, check the security documentation.

Tags: database, postgresql, production, security, database-browser, browse production database safely