Managed Services

Temps provides fully managed database and storage services that you can provision and link to your projects in seconds. Unlike platforms such as Vercel, which rely on external providers for databases and storage, Temps includes everything you need.

Available Services

Temps offers three types of managed services that integrate seamlessly with your projects:

  • PostgreSQL — Production-ready relational database
  • Redis — High-performance caching and pub/sub
  • S3 Storage — Object storage compatible with MinIO and AWS S3

All services are provisioned on your infrastructure, giving you complete control over your data.

Quick provision via CLI

# Create a PostgreSQL database
temps service create postgres my-db \
  --version 16 \
  --storage 10GB

# Create a Redis instance
temps service create redis my-cache \
  --version 7 \
  --memory 512MB

# Create S3 storage
temps service create s3 my-storage \
  --size 50GB

PostgreSQL Database

Creating a PostgreSQL Instance

Provision a production-ready PostgreSQL database with automatic backups and high availability:

Via Dashboard:

  1. Navigate to Services → Create Service
  2. Select PostgreSQL
  3. Choose version (13, 14, 15, or 16)
  4. Configure storage size and instance resources
  5. Click Create Database

Via CLI:

temps service create postgres production-db \
  --version 16 \
  --storage 20GB \
  --memory 2GB \
  --replicas 2

Configuration Options:

| Option | Description | Default |
|---|---|---|
| version | PostgreSQL version (13-16) | 16 |
| storage | Disk space allocation | 10GB |
| memory | RAM allocation | 1GB |
| replicas | Number of read replicas | 1 |
| backups | Automatic daily backups | Enabled |

Connecting to PostgreSQL

Once created, Temps generates a connection string you can use in your applications:

Node.js/TypeScript

import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: true
  }
});

// Query example
const result = await pool.query('SELECT * FROM users WHERE id = $1', [userId]);

Python

import os
import psycopg2

conn = psycopg2.connect(
    os.environ['DATABASE_URL'],
    sslmode='require'
)

cursor = conn.cursor()
cursor.execute('SELECT * FROM users WHERE id = %s', (user_id,))
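If you need the individual connection parameters rather than the full string, a PostgreSQL connection string is an ordinary URL and can be split with the standard library. A minimal sketch (the URL below is illustrative only — real values come from the connection string Temps generates):

```python
from urllib.parse import urlsplit

# Illustrative DATABASE_URL; a real one is provided by Temps via env vars
url = urlsplit("postgres://app_user:s3cret@db.internal:5432/production_db")

host, port = url.hostname, url.port          # db.internal, 5432
user, password = url.username, url.password  # app_user, s3cret
database = url.path.lstrip("/")              # production_db

print(host, port, database)
```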

Linking to Projects

Link your PostgreSQL database to one or more projects:

Link database to project

# Link database to project
temps service link production-db --project my-app

# Link to specific environment
temps service link production-db \
  --project my-app \
  --environment production

Database Migrations

Run migrations during deployment by configuring build hooks:

temps.json

{
  "build": {
    "command": "npm run build",
    "beforeDeploy": [
      "npm run db:migrate"
    ]
  }
}
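The db:migrate script itself can use any migration tool. Whatever you choose, the core step is the same: compare the migration files on disk against those already recorded in the database, and apply the difference in order. A minimal sketch of that selection logic (pure logic, no database connection; filenames are hypothetical):

```python
def pending_migrations(on_disk, applied):
    """Return migrations that still need to run, in lexicographic order.

    Relies on sortable filenames (e.g. 001_init.sql, 002_add_users.sql),
    which is why most tools prefix migrations with a sequence number.
    """
    return sorted(set(on_disk) - set(applied))

files = ["002_add_users.sql", "001_init.sql", "003_add_index.sql"]
done = ["001_init.sql"]
print(pending_migrations(files, done))  # ['002_add_users.sql', '003_add_index.sql']
```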

Redis Caching

Creating a Redis Instance

Provision a high-performance Redis instance for caching, sessions, or pub/sub:

Via Dashboard:

  1. Navigate to Services → Create Service
  2. Select Redis
  3. Choose version (6 or 7)
  4. Configure memory allocation
  5. Click Create Cache

Via CLI:

temps service create redis session-store \
  --version 7 \
  --memory 1GB \
  --persistence enabled

Configuration Options:

| Option | Description | Default |
|---|---|---|
| version | Redis version (6-7) | 7 |
| memory | RAM allocation | 512MB |
| persistence | Enable RDB/AOF | Disabled |
| eviction | Eviction policy | allkeys-lru |
| maxclients | Max connections | 10000 |

Connecting to Redis

Node.js with ioredis

import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

// Set a value
await redis.set('session:123', JSON.stringify(sessionData), 'EX', 3600);

// Get a value
const data = await redis.get('session:123');

Python with redis-py

import json
import os
import redis

# decode_responses=True returns str instead of bytes
r = redis.from_url(os.environ['REDIS_URL'], decode_responses=True)

# Set with expiration
r.setex('session:123', 3600, json.dumps(session_data))

# Get value
data = r.get('session:123')

Common Use Cases

Session Storage

Store user sessions with automatic expiration:

await redis.set(`session:${userId}`, JSON.stringify(sessionData), 'EX', 86400); // 24 hours

Rate Limiting

Implement rate limiting with Redis counters:

const count = await redis.incr(`rate:${ipAddress}`);
if (count === 1) await redis.expire(`rate:${ipAddress}`, 60); // 60-second window
if (count > 100) throw new Error('Rate limit exceeded');

Caching

Cache expensive queries or API responses:

const cached = await redis.get(`cache:${key}`);
if (cached) return JSON.parse(cached);

const data = await fetchExpensiveData();
await redis.set(`cache:${key}`, JSON.stringify(data), 'EX', 300); // 5 minutes

Pub/Sub

Real-time messaging between services:

// Publisher
await redis.publish('notifications', JSON.stringify(message));

// Subscriber — use a dedicated connection; a subscribed client cannot issue other commands
const subscriber = new Redis(process.env.REDIS_URL);
await subscriber.subscribe('notifications');
subscriber.on('message', (channel, message) => {
  console.log('Received:', JSON.parse(message));
});
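The caching pattern above (check the cache, fall back to the source, store the result) is usually worth wrapping in a helper. A minimal Python sketch of cache-aside, with the store injected so any client exposing get/setex works — redis-py fits this shape; the dict-backed store below is a stand-in for testing without a server:

```python
import json

def cached(store, key, ttl, fetch):
    """Cache-aside: return the cached value for key, or fetch, store, and return it."""
    hit = store.get(key)
    if hit is not None:
        return json.loads(hit)
    value = fetch()
    store.setex(key, ttl, json.dumps(value))
    return value

# Dict-backed stand-in for a Redis client (TTL ignored for brevity)
class FakeStore:
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def setex(self, key, ttl, value):
        self.data[key] = value

store = FakeStore()
calls = []
fetch = lambda: calls.append(1) or {"plan": "pro"}

print(cached(store, "user:42", 300, fetch))  # fetched from source
print(cached(store, "user:42", 300, fetch))  # served from cache
print(len(calls))  # the expensive fetch ran only once
```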

S3 Object Storage

Creating S3 Storage

Provision MinIO-compatible object storage for files, backups, and assets:

Via Dashboard:

  1. Navigate to Services → Create Service
  2. Select S3 Storage
  3. Configure storage size
  4. Set bucket policy (private/public)
  5. Click Create Storage

Via CLI:

temps service create s3 media-storage \
  --size 100GB \
  --public-read false

Configuration Options:

| Option | Description | Default |
|---|---|---|
| size | Storage allocation | 50GB |
| public-read | Public read access | false |
| versioning | Object versioning | Disabled |
| lifecycle | Auto-delete rules | None |

Connecting to S3

Temps S3 is compatible with AWS S3 SDKs:

Node.js with AWS SDK v3

import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
  },
});

// Upload a file
await s3.send(new PutObjectCommand({
  Bucket: 'media-storage',
  Key: 'uploads/image.jpg',
  Body: fileBuffer,
  ContentType: 'image/jpeg',
}));

// Download a file
const response = await s3.send(new GetObjectCommand({
  Bucket: 'media-storage',
  Key: 'uploads/image.jpg',
}));

Python with boto3

import boto3
import os

s3 = boto3.client(
    's3',
    endpoint_url=os.environ['S3_ENDPOINT'],
    aws_access_key_id=os.environ['S3_ACCESS_KEY'],
    aws_secret_access_key=os.environ['S3_SECRET_KEY'],
)

# Upload file
s3.upload_file('local-file.jpg', 'media-storage', 'uploads/image.jpg')

# Download file
s3.download_file('media-storage', 'uploads/image.jpg', 'downloaded.jpg')

Public URLs

Generate pre-signed URLs for temporary access:

import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const command = new GetObjectCommand({
  Bucket: 'media-storage',
  Key: 'uploads/private-document.pdf',
});

// URL expires in 1 hour
const url = await getSignedUrl(s3, command, { expiresIn: 3600 });

Service Management

Monitoring Service Health

View service metrics and health status in the dashboard:

  • CPU & Memory Usage: Real-time resource consumption
  • Connection Count: Active database/cache connections
  • Storage Usage: Disk space utilization
  • Uptime: Service availability percentage
  • Query Performance: Slow query detection (PostgreSQL)

CLI monitoring

# View service status
temps service status production-db

# View service logs
temps service logs production-db --tail 100

# View connection count
temps service stats production-db

Backups

All services include automatic daily backups retained for 7 days (configurable up to 90 days):

Backup management

# List backups
temps service backup list production-db

# Create manual backup
temps service backup create production-db --name "pre-migration"

# Restore from backup
temps service restore production-db --backup backup-20240115-120000

# Download backup
temps service backup download production-db \
  --backup backup-20240115-120000 \
  --output backup.sql.gz
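With a retention window, old backups eventually expire. Assuming the backup-YYYYMMDD-HHMMSS naming shown above, a small sketch of selecting the backups that fall outside a retention window:

```python
from datetime import datetime, timedelta

def expired_backups(names, now, retention_days):
    """Return backups older than the retention window.

    Assumes the backup-YYYYMMDD-HHMMSS naming convention.
    """
    cutoff = now - timedelta(days=retention_days)
    expired = []
    for name in names:
        stamp = name[len("backup-"):]
        taken = datetime.strptime(stamp, "%Y%m%d-%H%M%S")
        if taken < cutoff:
            expired.append(name)
    return expired

now = datetime(2024, 1, 15, 12, 0, 0)
names = ["backup-20240115-120000", "backup-20240110-080000", "backup-20240101-000000"]
print(expired_backups(names, now, 7))  # ['backup-20240101-000000']
```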

Scaling Services

Scale your services as your needs grow:

Scale services

# Scale PostgreSQL storage
temps service scale production-db --storage 50GB

# Scale Redis memory
temps service scale session-store --memory 2GB

# Add read replicas
temps service scale production-db --replicas 3

Security Best Practices

Connection Security

  • Always use SSL/TLS connections (sslmode=require for PostgreSQL)
  • Store credentials in environment variables, never in code
  • Use the Temps-provided connection strings (automatically secured)

Network Access

  • Services are only accessible from projects within your Temps instance
  • Configure IP allowlists for external access if needed
  • Use private networking for service-to-service communication

Credential Rotation

# Rotate database password
temps service rotate-password production-db

# Rotate S3 access keys
temps service rotate-keys media-storage

Connection strings are automatically updated in linked projects.

Audit Logging

All service access is logged for security auditing:

  • Connection attempts (successful/failed)
  • Query execution (optional for PostgreSQL)
  • Object access (S3 operations)
  • Configuration changes

Pricing & Resource Limits

Recommended Resource Allocation:

| Use Case | PostgreSQL | Redis | S3 Storage |
|---|---|---|---|
| Small Project | 1GB RAM, 10GB storage | 256MB | 10GB |
| Medium Project | 2GB RAM, 50GB storage | 1GB | 100GB |
| Large Project | 4GB+ RAM, 200GB+ storage | 4GB+ | 500GB+ |

Comparison with Other Platforms

| Feature | Temps | Vercel | Netlify | Railway |
|---|---|---|---|---|
| PostgreSQL | ✅ Built-in | ⚠️ Vercel Postgres (extra) | ❌ External only | ✅ Built-in |
| Redis | ✅ Built-in | ⚠️ Vercel KV (extra) | ❌ External only | ✅ Built-in |
| S3 Storage | ✅ MinIO-compatible | ⚠️ Vercel Blob (extra) | ❌ External only | ❌ External only |
| Connection Limits | ✅ No limits | ❌ Metered/capped | ❌ Metered/capped | ⚠️ Plan-based |
| Storage Cost | ✅ Your infrastructure | ❌ $0.15/GB/month | ❌ $0.10/GB/month | ❌ Pay per GB |
| Backups | ✅ Automatic | ⚠️ Enterprise only | ⚠️ Add-on | ✅ Automatic |
