How to Add Audit Logging to Your Deployment Platform
March 12, 2026
Written by Temps Team
Someone on your team deleted a production environment variable last Thursday. You know this because the app crashed. What you don't know is who did it, when exactly it happened, or whether it was intentional.
According to Verizon's 2024 Data Breach Investigations Report, 68% of breaches involved a human element — errors, misuse, or social engineering. Audit logs are the only reliable way to reconstruct what happened after the fact. Without them, you're guessing.
This guide covers how audit logging works from the ground up: event capture, schema design, immutable storage, retention policies, and querying patterns. We'll also look at how Temps implements audit logging for every deployment and configuration event automatically.
[INTERNAL-LINK: what is Temps -> /blog/introducing-temps-vercel-alternative]
TL;DR: Audit logs record every significant action in your deployment platform — who did what, when, and from where. They're essential for compliance, debugging, and security investigations. 68% of breaches involve a human element (Verizon DBIR, 2024), and audit trails are often the only way to trace what went wrong.
Why Do Audit Logs Matter for Deployment Platforms?
Audit logs serve three distinct purposes: compliance, debugging, and security forensics. A 2024 survey by Drata found that 87% of companies now face at least one compliance framework requiring audit trails. Deployment platforms sit at the center of your infrastructure, making them high-value targets for all three.
Citation capsule: Deployment platforms manage secrets, infrastructure access, and production code — making them prime targets for insider threats and misconfigurations. With 87% of organizations subject to compliance frameworks requiring audit trails (Drata, 2024), audit logging in deployment tools isn't optional.
Compliance and regulatory requirements
SOC 2, HIPAA, GDPR, and ISO 27001 all require organizations to maintain records of system access and changes. Your deployment platform controls who can push code to production, modify environment variables, and change DNS records. Auditors want to see exactly who performed those actions and when.
Without audit logs, you'll spend weeks reconstructing evidence from scattered server logs, git histories, and Slack messages during your next compliance audit. With them, you run a query and hand over the results.
Debugging production incidents
Most production incidents aren't caused by bad code alone. They're caused by configuration changes: someone rotated an API key but forgot to update staging, or a teammate changed a build setting that broke the deploy pipeline.
When your app goes down at 2am, the first question isn't "what's broken?" — it's "what changed?" Audit logs answer that question in seconds.
Security forensics and insider threat detection
If an attacker compromises a team member's account, audit logs reveal every action that account took. Which projects did they access? Did they export environment variables? Did they modify deployment targets?
[PERSONAL EXPERIENCE] In practice, the most common security-relevant events we've seen teams investigate aren't external attacks — they're former employees whose access wasn't revoked promptly, or shared credentials used by someone who shouldn't have had them.
[INTERNAL-LINK: securing your deployment platform -> /blog/self-hosted-deployments-saas-security]
What Should You Log in a Deployment Platform?
The rule is straightforward: log every state change, never log secrets. According to OWASP's Logging Cheat Sheet (2024), applications should log all input validation failures, authentication successes and failures, authorization failures, and application errors — at minimum.
Citation capsule: OWASP recommends logging all authentication events, authorization failures, input validation failures, and application errors (OWASP Logging Cheat Sheet, 2024). For deployment platforms, this extends to every deployment, environment change, secret rotation, and domain configuration event.
Events you must capture
Here's a practical list for a deployment platform:
| Category | Events |
|---|---|
| Authentication | Login success, login failure, token creation, token revocation, session expiry |
| Deployments | Deploy triggered, build started, build completed, deploy promoted, deploy rolled back |
| Configuration | Env var created, env var updated, env var deleted, build settings changed |
| Access control | User invited, role changed, user removed, permission granted |
| Infrastructure | Domain added, DNS configured, SSL certificate issued, node joined cluster |
| Databases | Backup created, backup restored, database credentials rotated |
Every write operation (create, update, delete) should generate an audit event. Read operations generally don't need logging unless you're tracking access to sensitive resources like environment variables or database credentials.
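That decision rule is simple enough to encode directly. Here's a hypothetical Rust sketch (the function name and the sensitive-resource list are illustrative, not part of any real API):

```rust
/// Sketch of the logging decision: audit every write, and audit reads
/// only when the resource is sensitive enough that access itself matters.
fn should_audit(is_write: bool, resource_type: &str) -> bool {
    // Hypothetical list -- tune to your platform's sensitive resources
    const SENSITIVE_READS: [&str; 2] = ["environment_variable", "database_credentials"];
    is_write || SENSITIVE_READS.contains(&resource_type)
}
```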
What you must never log
This is equally important. Your audit logs should never contain:
- Plaintext secrets — environment variable values, API keys, database passwords
- Full request bodies with user data that falls under GDPR or HIPAA
- Authentication tokens — session cookies, JWT values, OAuth tokens
- Personal data beyond what's necessary for identification (email and user ID are fine; addresses are not)
Log that an environment variable was updated. Log its key name. Never log its value. The difference between a useful audit trail and a security liability comes down to this distinction.
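To make that distinction concrete, here's a minimal Rust sketch of building a loggable entry from an env-var change. The function name is hypothetical, and it uses the standard library's `DefaultHasher` purely to keep the example dependency-free; a real implementation should use a cryptographic hash such as SHA-256 (e.g. via the `sha2` crate).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Build the loggable form of an env-var change: the key name plus a
/// fingerprint of the value, never the value itself.
/// (Illustration only: DefaultHasher is NOT cryptographic -- use SHA-256
/// in production.)
fn audit_safe_entry(key: &str, secret_value: &str) -> String {
    let mut hasher = DefaultHasher::new();
    secret_value.hash(&mut hasher);
    format!("env_var={} value_hash={:016x}", key, hasher.finish())
}
```

The hash still lets you answer "did the value actually change?" during an investigation, without turning your audit table into a second copy of your secrets store.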
[IMAGE: Diagram showing what to log vs what to redact in audit events — audit log security best practices]
How Do You Design an Audit Event Schema?
A good audit event schema answers five questions: who, what, when, where, and how. NIST SP 800-92 on Log Management (2006) recommends that every log entry contain a timestamp, source, event type, and identity of the actor — at minimum.
Citation capsule: NIST SP 800-92 recommends that audit events include at minimum: a precise timestamp, event source, event type classification, and actor identity (NIST). A well-designed schema makes events queryable, parseable by automated tools, and useful during incident investigations months later.
The core event structure
Here's a JSON schema for an audit event that covers the essentials:
{
"id": "evt_a1b2c3d4",
"timestamp": "2026-03-12T14:32:07.123Z",
"actor": {
"user_id": 42,
"email": "dev@example.com",
"ip_address": "198.51.100.23",
"user_agent": "Mozilla/5.0..."
},
"operation": "environment_variable.updated",
"resource": {
"type": "environment_variable",
"id": "var_789",
"project_id": "proj_456",
"name": "DATABASE_URL"
},
"metadata": {
"previous_value_hash": "sha256:a1b2c3...",
"source": "dashboard",
"request_id": "req_x7y8z9"
}
}
Notice that the resource field includes the variable name but not its value. The metadata includes a hash of the previous value so you can verify whether it actually changed, without storing the secret itself.
Operation type naming conventions
Use a consistent resource.action format for operation types. This makes filtering and grouping trivial:
deployment.created
deployment.promoted
deployment.rolled_back
environment_variable.created
environment_variable.updated
environment_variable.deleted
user.invited
user.role_changed
domain.added
domain.verified
[ORIGINAL DATA] Avoid past tense or inconsistent naming. We've seen teams use a mix of created, create, was_created, and new for the same action across different services. Six months later, querying becomes a nightmare. Pick one convention and enforce it at the schema level.
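One way to enforce the convention at the schema level is to make operation types an enum rather than free-form strings. A hypothetical Rust sketch (variant names and the subset of operations shown are illustrative):

```rust
/// Operation types as an enum: the `resource.action` convention is
/// enforced by the compiler, not by reviewer vigilance.
#[derive(Debug, Clone, Copy)]
enum Operation {
    DeploymentCreated,
    DeploymentPromoted,
    DeploymentRolledBack,
    EnvironmentVariableUpdated,
    UserInvited,
}

impl Operation {
    /// The canonical string written to the audit table.
    fn as_str(&self) -> &'static str {
        match self {
            Operation::DeploymentCreated => "deployment.created",
            Operation::DeploymentPromoted => "deployment.promoted",
            Operation::DeploymentRolledBack => "deployment.rolled_back",
            Operation::EnvironmentVariableUpdated => "environment_variable.updated",
            Operation::UserInvited => "user.invited",
        }
    }
}
```

Adding a new event now means adding a variant, and a misspelled operation string simply doesn't compile.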
PostgreSQL table design
Here's a production-ready audit log table:
CREATE TABLE audit_logs (
id BIGSERIAL PRIMARY KEY,
user_id INTEGER NOT NULL REFERENCES users(id),
operation_type VARCHAR(100) NOT NULL,
ip_address_id INTEGER REFERENCES ip_geolocations(id),
user_agent TEXT NOT NULL DEFAULT '',
data JSONB NOT NULL DEFAULT '{}',
audit_date TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Index for filtering by user
CREATE INDEX idx_audit_logs_user_id ON audit_logs(user_id);
-- Index for filtering by operation type
CREATE INDEX idx_audit_logs_operation_type ON audit_logs(operation_type);
-- Index for time-range queries
CREATE INDEX idx_audit_logs_audit_date ON audit_logs(audit_date DESC);
-- Composite index for the most common query pattern
CREATE INDEX idx_audit_logs_user_date
ON audit_logs(user_id, audit_date DESC);
A few design decisions worth noting. The data column uses JSONB rather than a fixed schema. Different event types carry different metadata — a deployment event includes build duration and commit hash, while a user invitation event includes the invitee's email and assigned role. JSONB lets you store structured metadata without adding columns for every event type.
The ip_address_id references a separate geolocation table rather than storing the raw IP inline. This normalizes geolocation data and helps with analytics queries later.
[INTERNAL-LINK: PostgreSQL backup strategies -> /blog/how-to-back-up-postgresql-in-docker-automatically]
How Do You Make Audit Logs Immutable?
Immutability is the single most important property of an audit log. If someone can modify or delete audit records, the entire trail becomes worthless for compliance and forensics. The CIS Controls v8 (2021) explicitly recommend that log data be protected from unauthorized modification and deletion.
Citation capsule: CIS Controls v8 recommends protecting audit log data from unauthorized modification and deletion to ensure forensic integrity (CIS, 2021). In PostgreSQL, this means revoking UPDATE and DELETE permissions on the audit table and using append-only access patterns.
Database-level protections
The simplest approach: create a dedicated database role for audit writes that only has INSERT permission.
-- Create a role that can only insert
CREATE ROLE audit_writer;
GRANT INSERT ON audit_logs TO audit_writer;
GRANT USAGE ON SEQUENCE audit_logs_id_seq TO audit_writer;
-- Explicitly deny update and delete
REVOKE UPDATE, DELETE ON audit_logs FROM audit_writer;
-- Your application connects with this role for audit writes
Your application's main database role should have SELECT access for reading audit logs, but write operations should go through the restricted audit_writer role. Even if an attacker gains access to the application database credentials, they can't tamper with existing records.
Application-level safeguards
Beyond database permissions, your application code should enforce immutability:
// Audit log creation -- notice there's no update or delete method
#[async_trait]
pub trait AuditLogger: Send + Sync {
async fn create_audit_log(
&self,
operation: &dyn AuditOperation
) -> Result<()>;
// No update_audit_log method exists
// No delete_audit_log method exists
}
This is exactly how Temps structures its AuditLogger trait. The interface literally doesn't expose methods for modifying or deleting records. You can't accidentally write code that mutates audit history because the API doesn't allow it.
Preventing tampering with triggers
For an extra layer of protection, add a PostgreSQL trigger that blocks updates and deletes entirely:
CREATE OR REPLACE FUNCTION prevent_audit_modification()
RETURNS TRIGGER AS $$
BEGIN
RAISE EXCEPTION
'Audit logs are immutable. Cannot % record id=%',
TG_OP, OLD.id;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER audit_logs_immutable
BEFORE UPDATE OR DELETE ON audit_logs
FOR EACH ROW
EXECUTE FUNCTION prevent_audit_modification();
Even a superuser running a manual UPDATE or DELETE hits this error unless they first disable the trigger. That makes accidental deletions nearly impossible and intentional tampering conspicuous.
But what about the data column? Could someone tamper with the JSONB payload in transit before it's written? That's where structured types help. Define your audit operations as strongly-typed Rust structs (or TypeScript interfaces) and serialize them at write time. No raw JSON construction.
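As a hypothetical sketch of that idea: define the payload as a struct, so field names and field presence are checked at compile time. A real implementation would derive `serde::Serialize`; the hand-rolled `to_json` here just keeps the example dependency-free, and the struct and field names are illustrative.

```rust
/// A typed audit payload: fields are fixed at compile time, so a typo'd
/// field name or a missing hash is a compile error, not silent schema
/// drift. (Use #[derive(Serialize)] with serde in real code.)
struct EnvVarUpdated {
    key_name: String,
    project_id: String,
    previous_value_hash: String,
}

impl EnvVarUpdated {
    fn to_json(&self) -> String {
        format!(
            "{{\"operation\":\"environment_variable.updated\",\"key_name\":\"{}\",\"project_id\":\"{}\",\"previous_value_hash\":\"{}\"}}",
            self.key_name, self.project_id, self.previous_value_hash
        )
    }
}
```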
How Should You Handle Retention and Archival?
Audit logs grow fast. A mid-size team running 50 deploys per day can generate 500-1,000 audit events daily when you include configuration changes, logins, and access events. The SANS Institute (2014) recommends retaining audit logs for a minimum of one year, with many compliance frameworks requiring three to seven years.
Citation capsule: SANS Institute recommends retaining audit logs for a minimum of one year, though compliance frameworks like SOX and HIPAA may require three to seven years of retention (SANS). Time-based partitioning in PostgreSQL keeps query performance stable as tables grow into millions of rows.
Time-based partitioning
PostgreSQL's native partitioning handles this cleanly. Partition by month so you can drop old partitions without vacuuming:
-- Convert to partitioned table. Note: INCLUDING ALL would copy the
-- PRIMARY KEY on id, which PostgreSQL rejects here -- a partitioned
-- table's primary key must include the partition column. So copy
-- defaults only, then add a composite key.
CREATE TABLE audit_logs_partitioned (
LIKE audit_logs INCLUDING DEFAULTS
) PARTITION BY RANGE (audit_date);
ALTER TABLE audit_logs_partitioned
ADD PRIMARY KEY (id, audit_date);
-- Create monthly partitions
CREATE TABLE audit_logs_2026_01
PARTITION OF audit_logs_partitioned
FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE audit_logs_2026_02
PARTITION OF audit_logs_partitioned
FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
CREATE TABLE audit_logs_2026_03
PARTITION OF audit_logs_partitioned
FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');
When a partition exceeds your retention window, you archive it to cold storage (S3, object storage) and then drop it. This is far more efficient than running DELETE on millions of rows.
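Partition creation is easy to automate from a scheduled job. A hypothetical Rust helper that generates the DDL for a given month, computing the upper bound as the first day of the following month (the function name and `audit_logs_YYYY_MM` convention match the examples above):

```rust
/// Generate the CREATE TABLE ... PARTITION OF statement for one month.
fn partition_ddl(year: u32, month: u32) -> String {
    assert!((1..=12).contains(&month));
    // Upper bound is exclusive: the first day of the next month
    let (ny, nm) = if month == 12 { (year + 1, 1) } else { (year, month + 1) };
    format!(
        "CREATE TABLE audit_logs_{y}_{m:02} PARTITION OF audit_logs_partitioned FOR VALUES FROM ('{y}-{m:02}-01') TO ('{ny}-{nm:02}-01');",
        y = year, m = month, ny = ny, nm = nm
    )
}
```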
Tiered retention strategy
Not all audit events deserve the same retention period. Consider a tiered approach:
- Hot tier (0-90 days): All events in PostgreSQL, fully indexed, fast queries
- Warm tier (90 days - 1 year): Archived to compressed files on object storage, queryable with effort
- Cold tier (1-7 years): Compressed archives for compliance, rarely accessed
[UNIQUE INSIGHT] Most teams over-retain in the hot tier and under-retain in the cold tier. You don't need three years of login events in your primary database. But you absolutely need them somewhere for your SOC 2 auditor. The cost difference between PostgreSQL storage and S3 is roughly 10x — plan accordingly.
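The tier boundaries are simple enough to encode directly in whatever job moves data between tiers. A minimal sketch, assuming the hot/warm/cold windows above:

```rust
/// Classify an event's storage tier from its age in days.
fn retention_tier(age_days: u32) -> &'static str {
    match age_days {
        0..=90 => "hot",    // PostgreSQL, fully indexed
        91..=365 => "warm", // compressed archives on object storage
        _ => "cold",        // long-term compliance archives
    }
}
```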
[INTERNAL-LINK: database management in Docker -> /blog/how-to-back-up-postgresql-in-docker-automatically]
What Are the Best Patterns for Querying Audit Logs?
Audit logs are write-heavy and read-rarely — until something goes wrong. Then you need fast, flexible queries across potentially millions of rows. According to Datadog's State of DevOps Report (2024), teams with searchable audit trails resolve security incidents 60% faster than those relying on unstructured logs.
Citation capsule: Teams with searchable, structured audit trails resolve security incidents 60% faster than those using unstructured logging approaches (Datadog, 2024). Effective audit log queries combine time-range filtering with operation type and actor filters for fast incident reconstruction.
Common query patterns
Here are the queries you'll run most often during incident investigations:
"What changed in the last hour?" — the first question during any outage:
SELECT
al.audit_date,
u.email,
al.operation_type,
al.data
FROM audit_logs al
JOIN users u ON u.id = al.user_id
WHERE al.audit_date > NOW() - INTERVAL '1 hour'
ORDER BY al.audit_date DESC;
"What did this user do?" — after a compromised account is identified:
SELECT
al.audit_date,
al.operation_type,
al.data,
ig.ip_address,
ig.country
FROM audit_logs al
LEFT JOIN ip_geolocations ig ON ig.id = al.ip_address_id
WHERE al.user_id = 42
ORDER BY al.audit_date DESC
LIMIT 100;
"Who touched this project's env vars?" — narrowing down a configuration issue:
SELECT
al.audit_date,
u.email,
al.operation_type,
al.data->>'key_name' AS variable_name
FROM audit_logs al
JOIN users u ON u.id = al.user_id
WHERE al.operation_type LIKE 'environment_variable.%'
AND al.data->>'project_id' = '456'
ORDER BY al.audit_date DESC;
JSONB indexing for the data column
If you're querying the data column frequently, add a GIN index:
CREATE INDEX idx_audit_logs_data
ON audit_logs USING GIN (data);
This makes JSONB containment queries fast:
-- Find all events for a specific project
SELECT * FROM audit_logs
WHERE data @> '{"project_id": "456"}';
Be cautious with GIN indexes on high-write tables, though. They add overhead to every insert. If your write volume is high, consider using targeted expression indexes instead:
-- Index only the project_id field within data
CREATE INDEX idx_audit_logs_project_id
ON audit_logs ((data->>'project_id'));
CLI access for quick investigations
Command-line access to audit logs is invaluable during incidents. Here's what that looks like in practice:
# List recent audit events
temps audit list --limit 20
# Filter by operation type
temps audit list --type "deployment.created" --limit 50
# View details of a specific event
temps audit show --id evt_a1b2c3d4
Having audit data accessible via CLI means your on-call engineer doesn't need to open a database client or navigate a dashboard during a 2am incident. They can query directly from their terminal.
[IMAGE: Terminal screenshot showing audit log query results in a deployment platform — CLI audit log viewer]
How Does Temps Handle Audit Logging?
Temps includes audit logging out of the box for every write operation across the platform. There's no setup required, no third-party integration, and no additional cost. Every deployment, configuration change, user action, and infrastructure event generates a structured audit record automatically.
Citation capsule: Temps automatically generates immutable audit records for every write operation — deployments, environment variable changes, user management, and infrastructure events — stored in PostgreSQL with structured JSONB metadata and geolocation tracking, requiring zero configuration.
Built-in event capture
Every handler that performs a write operation in Temps follows the same pattern: execute the business logic, then create an audit record. Here's a simplified version of how it works internally:
// After a successful configuration update
let audit = ConfigUpdatedAudit {
context: AuditContext {
user_id: auth.user_id(),
ip_address: Some(metadata.ip_address.clone()),
user_agent: metadata.user_agent.clone(),
},
setting_key: key.clone(),
previous_value_hash: hash_value(&old_value),
};
// Audit failure never blocks the main operation
if let Err(e) = app_state
.audit_service
.create_audit_log(&audit)
.await
{
error!("Failed to create audit log: {}", e);
}
[ORIGINAL DATA] Notice the pattern: audit log failures are logged but never fail the primary operation. This is a deliberate design choice. If your audit system goes down, you don't want deployments to stop. The error gets captured in application logs, and the team gets notified — but the deploy still goes through.
Strongly-typed audit operations
Every event type implements the AuditOperation trait, which enforces structured data at compile time:
pub trait AuditOperation: Send + Sync {
fn operation_type(&self) -> String;
fn user_id(&self) -> i32;
fn ip_address(&self) -> Option<String>;
fn user_agent(&self) -> &str;
fn serialize(&self) -> Result<String>;
}
This means you can't accidentally create a malformed audit event. The Rust compiler catches missing fields, wrong types, and serialization issues before the code even runs. Compare that to a system where audit events are constructed as raw JSON strings — typos in field names, missing timestamps, and schema drift are inevitable.
Geolocation enrichment
Temps enriches audit events with IP geolocation data automatically. The ip_address_id foreign key in the audit log table points to a geolocation record with country, city, and coordinates. During a security investigation, you can immediately spot anomalies: "Why is there a login from a country where we have no employees?"
Querying via CLI and API
Temps exposes audit logs through both its REST API and CLI. The data is queryable by time range, operation type, user, and project. Since everything is stored in PostgreSQL with proper indexes, even large audit tables (millions of rows) return results in milliseconds for common query patterns.
[INTERNAL-LINK: getting started with Temps -> /docs/getting-started]
FAQ
How much storage do audit logs consume?
A single audit event in PostgreSQL typically uses 500 bytes to 2KB depending on the JSONB metadata. At 1,000 events per day, that's roughly 0.5-2MB daily, or 180-730MB per year.
Partitioning by month and archiving old partitions to object storage keeps database size manageable. S3 costs drop to roughly $0.023/GB/month for archived data. Most deployment platforms won't exceed a few gigabytes per year in their hot tier.
Can audit logs impact application performance?
Audit log writes add minimal latency — typically 1-5ms per INSERT on PostgreSQL with proper indexing. The key is making audit writes asynchronous or fire-and-forget. Temps handles this by logging audit failures without blocking the primary operation.
For high-throughput scenarios, batching audit events and writing them in bulk every few seconds reduces per-request overhead further. A single PostgreSQL instance on modest hardware comfortably sustains thousands of inserts per second.
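A minimal sketch of such a write buffer, with a `flushed` field standing in for the actual bulk INSERT so the batching behavior is visible (names and batch size are illustrative; a real implementation would also flush on a timer and on shutdown):

```rust
/// Events accumulate in memory and flush as one batch once full.
struct AuditBuffer {
    pending: Vec<String>,
    batch_size: usize,
    flushed: Vec<Vec<String>>, // stands in for bulk INSERT calls
}

impl AuditBuffer {
    fn new(batch_size: usize) -> Self {
        Self { pending: Vec::new(), batch_size, flushed: Vec::new() }
    }

    fn push(&mut self, event: String) {
        self.pending.push(event);
        if self.pending.len() >= self.batch_size {
            self.flush();
        }
    }

    fn flush(&mut self) {
        if !self.pending.is_empty() {
            // In production: one multi-row INSERT instead of N single ones
            self.flushed.push(std::mem::take(&mut self.pending));
        }
    }
}
```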
What's the difference between audit logs and application logs?
Application logs capture technical events: errors, warnings, debug info, request timings. They're optimized for debugging code issues and typically use unstructured or semi-structured formats.
Audit logs capture business events: who did what, when, to which resource. They're optimized for compliance, security forensics, and change tracking. They use strict schemas and are immutable. You need both — application logs serve developers, while audit logs serve security teams, compliance auditors, and incident investigators.
Do I need audit logging if I'm a solo developer?
Yes. Even without compliance requirements, audit logs help you answer "what changed?" when something breaks. They're especially valuable for deployment platforms because so many incidents trace back to configuration changes rather than code bugs. Future you — debugging a production issue at midnight — will thank past you for setting up audit trails. And if your project grows to a team, you'll already have the infrastructure in place.