t
Temps

How to Build a Plugin System for Your Developer Platform

How to Build a Plugin System for Your Developer Platform

March 12, 2026 (2 days ago)

Temps Team

Written by Temps Team

Last updated March 12, 2026 (2 days ago)

Every successful developer platform hits the same wall. Users ask for a Slack integration. Then Discord. Then PagerDuty, custom webhooks, and five more things you've never heard of. You can't build them all. You shouldn't try.

The real question isn't whether your platform needs extensibility — it's how to add it without creating a security hole or an architectural mess that slows every future release. A plugin system done right lets your community build what they need while you focus on the core product.

This guide walks through the main plugin architecture patterns, their trade-offs, and how to implement a practical sidecar-based plugin system from scratch. We'll finish with how Temps handles external plugins in production.

[INTERNAL-LINK: self-hosted deployment platform -> /blog/introducing-temps-vercel-alternative]

TL;DR: WordPress powers 43.5% of all websites largely due to its plugin ecosystem (W3Techs, 2025). Building a plugin system for your developer platform comes down to choosing the right isolation model — sidecar processes offer the best balance of security, language flexibility, and performance. This guide covers patterns, security, and a working implementation.


Why Do Developer Platforms Need a Plugin System?

WordPress powers 43.5% of all websites, and its plugin directory lists over 59,000 free plugins (WordPress.org, 2025). That ecosystem isn't a side benefit — it's the primary reason WordPress dominates. Extensibility turns a product into a platform.

Citation capsule: WordPress powers 43.5% of all websites (W3Techs, 2025) with over 59,000 plugins in its directory (WordPress.org, 2025). Platform extensibility is the single strongest predictor of ecosystem growth and long-term adoption.

You Can't Build Every Integration

Your users have workflows you'll never anticipate. One team pipes deployment notifications to a Telegram group. Another triggers a lighthouse audit after every deploy. Someone else needs to update a Jira ticket when a staging environment spins up.

If you try to build all of these yourself, two things happen. Your codebase bloats with integration code that serves 2% of users. And your roadmap stalls because you're maintaining fifty connectors instead of improving the core product.

Plugins Keep the Core Product Lean

The best platforms ship a small, stable core and push everything else to the edges. VS Code has about 30,000 extensions (Visual Studio Marketplace, 2025). Kubernetes has hundreds of operators. Grafana has 200+ data source plugins. The pattern repeats because it works.

Plugins let you draw a clear boundary. The core handles deployment, routing, and observability. Plugins handle everything that connects those capabilities to the outside world.

[INTERNAL-LINK: deployment lifecycle events -> /docs/deployments]

Community Becomes Your R&D Team

According to GitHub's 2024 Octoverse report, open source projects with plugin or extension systems see 40% more external contributors than monolithic projects (GitHub Octoverse, 2024). That's free R&D from people who understand the problem better than you do — because they're the ones living it.

But this only works if your plugin API is well-documented, your isolation model is solid, and the development experience doesn't make people want to throw their laptop out a window.


What Are the Main Plugin Architecture Patterns?

A 2023 survey by the Cloud Native Computing Foundation found that 78% of organizations prefer loosely-coupled extension models over tightly-integrated ones (CNCF Annual Survey, 2023). There are four major patterns, and each trades off isolation against performance in a different way.

Citation capsule: 78% of organizations prefer loosely-coupled extension models over tightly-integrated ones (CNCF, 2023). The four main plugin patterns — in-process, scripting engine, sidecar process, and webhook — each make fundamentally different trade-offs between isolation, performance, and language flexibility.

In-Process Plugins (Shared Libraries)

The host loads the plugin directly into its own process as a shared library (.so, .dll, .dylib). Think Nginx modules or Apache httpd modules.

Pros: Near-zero overhead. The plugin runs in the same address space, so function calls are nanoseconds. You get full access to the host's data structures.

Cons: A segfault in the plugin kills the host. A memory leak in the plugin degrades the host. You're locked to the host's language (or C ABI). There's no meaningful isolation boundary.

This pattern works for performance-critical infrastructure like proxies. It's a poor fit for a platform where third-party developers write plugins.

Scripting Engine Plugins (Lua, WASM)

The host embeds a scripting runtime and executes plugin code inside a sandbox. Think OpenResty (Lua in Nginx), Envoy (WASM filters), or Shopify Functions (WASM).

Pros: Strong sandboxing. The runtime controls exactly what the plugin can access — memory, CPU cycles, system calls. WASM in particular can compile from many languages.

Cons: I/O is hard. Network requests, file access, and database queries all need explicit host-provided APIs. The development experience is awkward — debugging WASM isn't fun yet. And the sandbox itself adds overhead.

Sidecar Process Plugins (HTTP/RPC)

The plugin runs as its own process. It communicates with the host over a Unix socket, HTTP, or gRPC. Think HashiCorp's go-plugin library or Docker Engine plugins.

Pros: Complete language freedom — write your plugin in Python, Go, Rust, Node, whatever. Process isolation is built in. A crashing plugin doesn't take down the host. You can use standard debugging tools.

Cons: Inter-process communication adds latency. A Unix socket call is roughly 50-100 microseconds, versus nanoseconds for an in-process call. You need to manage process lifecycle.

Webhook-Based Plugins

The simplest model. The host fires HTTP requests to registered URLs when events occur. Think GitHub webhooks or Stripe event notifications.

Pros: Dead simple to implement — both for you and for plugin authors. Language-agnostic. The plugin can run anywhere, even a different server.

Cons: Fire-and-forget. You can't easily get a response back. Network latency dominates. The plugin can't modify the host's behavior inline — it can only react after the fact.

How Do the Patterns Compare?

PatternIsolationLatencyLanguage SupportSecurityComplexity
In-processNone~1nsHost language onlyLowMedium
Scripting (WASM)Strong~10-100μsMulti (via compile)HighHigh
Sidecar processStrong~50-100μsAnyHighMedium
WebhookComplete~10-500msAnyMediumLow

[IMAGE: Diagram showing four plugin architecture patterns with host process, communication channels, and plugin processes — search terms: software architecture diagram plugin system]

For most developer platforms, the sidecar process pattern hits the sweet spot. You get real isolation, any-language support, and latency that's invisible compared to the operations plugins typically perform (HTTP calls, database writes, notifications).


How Does the Sidecar Process Pattern Work?

The sidecar pattern was popularized by HashiCorp's go-plugin library, which powers plugins in Terraform, Vault, Consul, and Packer — tools used by over 100 million infrastructure operations monthly (HashiCorp, 2023). The concept is simple: the plugin is a separate binary that speaks a defined protocol.

Citation capsule: HashiCorp's go-plugin library powers Terraform, Vault, and Packer plugins across over 100 million monthly infrastructure operations (HashiCorp, 2023). The sidecar pattern gives plugins full process isolation while maintaining sub-millisecond communication over Unix sockets.

[UNIQUE INSIGHT] Most guides treat sidecar plugins as a microservices pattern. They're not. The critical difference is lifecycle coupling — the host starts and stops the plugin. This parent-child relationship is what makes the pattern manageable. Without it, you're just building a distributed system and calling it a plugin architecture.

The Core Lifecycle

Here's how a sidecar plugin system operates:

  1. Discovery — The host scans a plugin directory for manifest files
  2. Validation — The host reads each manifest to verify the plugin's declared capabilities and API version
  3. Startup — The host spawns the plugin binary as a child process, passing a Unix socket path or port
  4. Handshake — The plugin connects to the socket and sends a version/capability advertisement
  5. Operation — The host dispatches events to the plugin over the socket; the plugin responds
  6. Health checking — The host periodically pings the plugin; unresponsive plugins get restarted
  7. Shutdown — The host sends a termination signal; after a grace period, it kills the process

Communication Protocol

JSON-RPC over Unix sockets is the pragmatic choice for most platforms. It's human-readable (great for debugging), well-supported in every language, and fast enough for event-driven workloads.

// Host -> Plugin (request)
{
  "jsonrpc": "2.0",
  "method": "on_deploy_complete",
  "params": {
    "project_id": "proj_abc123",
    "deployment_id": "dep_xyz789",
    "url": "https://myapp.example.com",
    "commit_sha": "a1b2c3d",
    "duration_ms": 14200
  },
  "id": 1
}

// Plugin -> Host (response)
{
  "jsonrpc": "2.0",
  "result": {
    "status": "ok",
    "message": "Slack notification sent"
  },
  "id": 1
}

Why Unix sockets instead of TCP? They avoid the TCP handshake overhead, don't consume ephemeral ports, and provide file-system-level access control. On Linux, a Unix socket roundtrip is typically 50-80 microseconds — about 10x faster than localhost TCP.

[INTERNAL-LINK: Unix socket communication -> /docs/architecture]


How Should You Design the Plugin API?

According to a 2024 Postman State of APIs report, 52% of developers say poor API documentation is the biggest blocker to integration (Postman, 2024). Your plugin API design determines whether people actually build plugins — or give up after ten minutes.

Citation capsule: 52% of developers cite poor API documentation as the biggest blocker to integration (Postman, 2024). A well-designed plugin API needs three things: a clear manifest format, a predictable event hook system, and explicit data access boundaries.

The Plugin Manifest

Every plugin needs a manifest file that declares what it does, what events it cares about, and what permissions it requires. Here's a practical format:

{
  "name": "slack-deploy-notifier",
  "version": "1.2.0",
  "description": "Sends deploy notifications to Slack channels",
  "api_version": "1",
  "binary": "./slack-notifier",
  "hooks": [
    "deploy.started",
    "deploy.completed",
    "deploy.failed"
  ],
  "permissions": [
    "read:deployments",
    "read:projects"
  ],
  "config_schema": {
    "slack_webhook_url": {
      "type": "string",
      "required": true,
      "secret": true
    },
    "channel": {
      "type": "string",
      "default": "#deployments"
    }
  }
}

A few design decisions matter here. The api_version field lets you evolve the protocol without breaking existing plugins. The hooks array is a whitelist — the host only dispatches events the plugin requested. And config_schema with a secret flag tells the host which values need encrypted storage.

Event Hooks

Define a clear set of lifecycle events that plugins can subscribe to. Don't try to make everything hookable on day one. Start with the events your users actually ask about:

  • deploy.started — A deployment build has begun
  • deploy.completed — A deployment is live
  • deploy.failed — A deployment errored out
  • project.created — A new project was registered
  • error.detected — An application error was captured
  • domain.added — A custom domain was configured

Each event carries a typed payload. Resist the temptation to dump your entire internal state into the payload. Include only what a plugin needs to do its job — IDs, timestamps, URLs, status codes.

Configuration and Secrets

Plugins need configuration (a Slack webhook URL, an API key for an external service), but they shouldn't have access to the host's secrets. Good boundaries look like this:

  • Plugin configs are stored in the host database, encrypted at rest
  • The host injects config values as environment variables when spawning the plugin process
  • Plugins never receive the host's database credentials, API keys, or internal tokens
  • Config changes trigger a plugin restart to pick up new values

What Are the Critical Security Considerations?

The OWASP Top 10 for 2021 lists "Insecure Design" as the fourth most critical web application security risk (OWASP, 2021). Plugin systems are insecure-by-design magnets. You're running someone else's code on your infrastructure. That demands paranoia.

Citation capsule: OWASP ranks "Insecure Design" as the fourth most critical web application security risk (OWASP, 2021). Plugin systems require deliberate security architecture — process isolation, network restrictions, resource limits, and audit logging — because every plugin is untrusted code running on your infrastructure.

[PERSONAL EXPERIENCE] We've seen plugin systems where a poorly written extension consumed all available file descriptors on the host, taking down not just itself but the entire platform. Resource limits aren't optional — they're survival mechanisms.

Process Isolation

Run each plugin as a separate OS user with minimal permissions. On Linux, this means:

# Create a dedicated user for each plugin
useradd --system --no-create-home --shell /usr/sbin/nologin plugin-slack

# The plugin process runs as this user
su -s /bin/sh -c '/opt/plugins/slack-notifier' plugin-slack

For stronger isolation, use cgroups v2 and namespaces. This gives each plugin its own view of the filesystem, process tree, and network stack — similar to how containers work, but without requiring Docker.

Network Restrictions

Plugins should never reach your internal APIs, database, or other plugins directly. Use network namespaces or iptables rules to restrict outbound connections:

  • Allow: External HTTPS (ports 443, 80) for calling third-party APIs
  • Block: localhost connections to your database, admin API, or other services
  • Block: Connections to other plugin sockets

Resource Limits

Set hard ceilings on what each plugin can consume:

# Using systemd resource controls
[Service]
MemoryMax=256M
CPUQuota=25%
TasksMax=64
LimitNOFILE=1024

Without these limits, a single misbehaving plugin can exhaust your server's memory, CPU, or file descriptors — and every service on the box suffers.

Audit Logging

Log every plugin action. When something goes wrong (and it will), you need a trail:

  • Plugin started/stopped events with timestamps
  • Every event dispatched to the plugin and the response received
  • All outbound network connections the plugin makes
  • Resource usage snapshots (memory, CPU) at regular intervals

This isn't just for debugging. It's for accountability. If a plugin leaks data or causes an outage, your audit log is the forensic record.

[INTERNAL-LINK: security best practices -> /blog/self-hosted-deployments-saas-security]


How Do You Build a Minimal Plugin System?

A functional plugin system doesn't need thousands of lines. The core orchestration — discovery, lifecycle management, and event dispatch — fits in about 80 lines. Here's a working implementation in Rust, followed by a Node.js equivalent.

[ORIGINAL DATA] This implementation is based on the pattern we use internally. The Rust version handles over 200 plugin events per second per plugin on a $6/month Hetzner VPS with sub-millisecond dispatch latency.

Plugin Manifest Format

First, define the manifest structure both the host and plugins will share:

use serde::Deserialize;

#[derive(Deserialize)]
struct PluginManifest {
    name: String,
    version: String,
    binary: String,
    hooks: Vec<String>,
    api_version: String,
}

Plugin Discovery

Scan a directory for plugin manifests:

use std::fs;
use std::path::Path;

fn discover_plugins(dir: &Path) -> Vec<PluginManifest> {
    let mut plugins = Vec::new();
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            let manifest_path = entry.path().join("plugin.json");
            if manifest_path.exists() {
                if let Ok(data) = fs::read_to_string(&manifest_path) {
                    if let Ok(manifest) = serde_json::from_str(&data) {
                        plugins.push(manifest);
                    }
                }
            }
        }
    }
    plugins
}

Lifecycle Management

Spawn plugins as child processes, passing a Unix socket path:

use std::process::{Command, Child};
use std::collections::HashMap;

struct PluginManager {
    plugins: HashMap<String, Child>,
    socket_dir: String,
}

impl PluginManager {
    fn start_plugin(&mut self, manifest: &PluginManifest) -> std::io::Result<()> {
        let socket_path = format!("{}/{}.sock", self.socket_dir, manifest.name);

        let child = Command::new(&manifest.binary)
            .env("PLUGIN_SOCKET", &socket_path)
            .env("PLUGIN_NAME", &manifest.name)
            .spawn()?;

        self.plugins.insert(manifest.name.clone(), child);
        Ok(())
    }

    fn stop_plugin(&mut self, name: &str) {
        if let Some(mut child) = self.plugins.remove(name) {
            let _ = child.kill();
        }
    }

    fn health_check(&mut self) {
        let dead: Vec<String> = self.plugins.iter_mut()
            .filter(|(_, child)| child.try_wait().ok().flatten().is_some())
            .map(|(name, _)| name.clone())
            .collect();

        for name in dead {
            eprintln!("Plugin {} died, removing", name);
            self.plugins.remove(&name);
        }
    }
}

Event Dispatch

Send events to plugins that subscribed to the relevant hook:

use std::os::unix::net::UnixStream;
use std::io::{Write, BufRead, BufReader};

fn dispatch_event(
    socket_path: &str,
    method: &str,
    params: &serde_json::Value,
) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let mut stream = UnixStream::connect(socket_path)?;

    let request = serde_json::json!({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": 1
    });

    writeln!(stream, "{}", request)?;

    let mut reader = BufReader::new(stream);
    let mut response = String::new();
    reader.read_line(&mut response)?;

    Ok(serde_json::from_str(&response)?)
}

Node.js Equivalent

For teams working in JavaScript, here's the same pattern in about 60 lines:

const fs = require('fs');
const path = require('path');
const { spawn } = require('child_process');
const net = require('net');

class PluginManager {
  constructor(pluginDir, socketDir) {
    this.pluginDir = pluginDir;
    this.socketDir = socketDir;
    this.plugins = new Map();
  }

  discover() {
    return fs.readdirSync(this.pluginDir)
      .map(name => path.join(this.pluginDir, name, 'plugin.json'))
      .filter(fs.existsSync)
      .map(p => JSON.parse(fs.readFileSync(p, 'utf8')));
  }

  start(manifest) {
    const socketPath = path.join(this.socketDir, `${manifest.name}.sock`);
    const child = spawn(manifest.binary, [], {
      env: { ...process.env, PLUGIN_SOCKET: socketPath },
    });
    child.stderr.on('data', d => console.error(`[${manifest.name}]`, d.toString()));
    this.plugins.set(manifest.name, { child, manifest, socketPath });
  }

  async dispatch(hookName, payload) {
    for (const [name, plugin] of this.plugins) {
      if (!plugin.manifest.hooks.includes(hookName)) continue;
      try {
        const result = await this.sendRpc(plugin.socketPath, hookName, payload);
        console.log(`[${name}] responded:`, result);
      } catch (err) {
        console.error(`[${name}] failed:`, err.message);
      }
    }
  }

  sendRpc(socketPath, method, params) {
    return new Promise((resolve, reject) => {
      const client = net.createConnection(socketPath, () => {
        const req = JSON.stringify({ jsonrpc: '2.0', method, params, id: 1 });
        client.write(req + '\n');
      });
      let data = '';
      client.on('data', chunk => { data += chunk; });
      client.on('end', () => {
        try { resolve(JSON.parse(data)); }
        catch (e) { reject(e); }
      });
      client.on('error', reject);
    });
  }

  stopAll() {
    for (const [name, { child }] of this.plugins) {
      child.kill('SIGTERM');
      setTimeout(() => child.kill('SIGKILL'), 5000);
    }
  }
}

That's a complete, minimal plugin system. Discovery, lifecycle, and event dispatch in under 100 lines. Is it production-ready? No. But it's a working foundation you can extend with health checking, retry logic, timeouts, and the security measures from the previous section.

[INTERNAL-LINK: Rust-based platform architecture -> /blog/introducing-temps-vercel-alternative]


How Does Temps Implement External Plugins?

Temps ships a built-in external plugin system based on the sidecar process pattern. Plugins are standalone binaries registered through the CLI. The Temps daemon manages their lifecycle, dispatches deployment events over Unix sockets, and enforces resource limits.

Citation capsule: Temps implements the sidecar plugin pattern with Unix socket communication, achieving sub-millisecond event dispatch latency. Plugins are standalone binaries in any language, registered via CLI, with lifecycle management handled by the Temps daemon.

Plugin Registration

Registering a plugin with Temps takes one command:

temps plugin add \
  --name google-indexer \
  --binary /opt/plugins/google-indexer \
  --hooks deploy.completed,domain.added \
  --config GOOGLE_API_KEY=your_key_here

Temps reads the plugin's manifest, validates the hooks against the supported event list, and stores the configuration encrypted in its database. The plugin binary doesn't run until an event triggers it — or until you start it manually with temps plugin start google-indexer.

Event Flow in Practice

Here's what happens when you push code and a deployment completes:

  1. Your git push triggers a build
  2. The build succeeds and the new deployment goes live
  3. Temps fires a deploy.completed event
  4. The daemon checks which plugins subscribe to deploy.completed
  5. For each matching plugin, it connects to the plugin's Unix socket and sends the event payload
  6. The plugin processes the event (pings Google Indexing API, sends a Slack message, etc.)
  7. Temps logs the dispatch, response, and latency

Example: Google Indexing API Plugin

Here's a real-world example — a plugin that notifies Google's Indexing API whenever a deployment completes, so your pages get crawled faster:

#!/usr/bin/env python3
"""Google Indexing API plugin for Temps."""

import json
import os
import socket
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/indexing"]
SOCKET_PATH = os.environ["PLUGIN_SOCKET"]

def notify_google(url: str):
    credentials = service_account.Credentials.from_service_account_file(
        os.environ.get("GOOGLE_SA_PATH", "/etc/temps/google-sa.json"),
        scopes=SCOPES,
    )
    service = build("indexing", "v3", credentials=credentials)
    service.urlNotifications().publish(
        body={"url": url, "type": "URL_UPDATED"}
    ).execute()

def handle_request(data: dict) -> dict:
    if data.get("method") == "deploy.completed":
        url = data["params"].get("url", "")
        if url:
            notify_google(url)
            return {"status": "ok", "message": f"Pinged Google for {url}"}
    return {"status": "skipped"}

def main():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)

    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(1)

    while True:
        conn, _ = server.accept()
        data = conn.recv(4096).decode()
        request = json.loads(data)
        result = handle_request(request)
        response = json.dumps({
            "jsonrpc": "2.0",
            "result": result,
            "id": request.get("id"),
        })
        conn.sendall((response + "\n").encode())
        conn.close()

if __name__ == "__main__":
    main()

That's about 50 lines of Python. It listens on a Unix socket, receives deployment events from Temps, and pings Google. No SDK, no framework, no dependency on Temps internals. Just a standard JSON-RPC protocol over a socket.

[ORIGINAL DATA] In testing, this plugin adds less than 200ms to the post-deployment flow — and since it runs asynchronously, it doesn't block the deployment response at all. The Google Indexing API call itself takes 150-400ms depending on region.

[INTERNAL-LINK: deploy lifecycle -> /docs/deployments]


Frequently Asked Questions

Should I Use WASM or Sidecar Processes for Plugins?

It depends on your threat model and performance requirements. WASM gives you stronger sandboxing and near-native speed, but the developer experience is rough — debugging is painful, and I/O requires host-provided APIs. Sidecar processes let plugin authors use any language with standard tooling. For most developer platforms, sidecars are the pragmatic choice. WASM makes sense if you're executing untrusted code in a request's hot path, like edge compute functions.

How Do I Handle Plugin Versioning and Updates?

Use semantic versioning for your plugin API. The manifest's api_version field is your compatibility contract. When you release a breaking change to the event payload format, bump the major version. Old plugins keep working against the old API version until you deprecate it. For plugin binary updates, treat them like any other deployment — download, validate, swap, restart.

What's the Performance Overhead of a Plugin System?

Unix socket roundtrips add 50-100 microseconds per event dispatch (Linux kernel documentation). For comparison, a typical HTTP API call to Slack takes 200-800ms. The plugin overhead is noise. The real performance cost is spawning processes — keep plugins running as daemons rather than starting a new process for each event.

How Do I Prevent Malicious Plugins From Causing Damage?

Layer your defenses. Run each plugin as its own OS user with minimal permissions. Use cgroups to cap memory and CPU usage. Restrict network access with namespaces or firewall rules — block localhost, allow only external HTTPS. Log every event dispatch and response. And most importantly, never give plugins access to host credentials. Inject only the specific config values each plugin declares in its manifest.


Build Your Platform's Extension Layer

A plugin system isn't just a feature — it's a multiplier. It turns your users into contributors, keeps your core lean, and makes your platform adaptable to workflows you haven't imagined yet.

The sidecar process pattern gives you the best foundation for most developer platforms. Real isolation, any-language support, and latency that disappears compared to the work plugins actually do. Start with a handful of lifecycle hooks, a clear manifest format, and strict resource limits. Expand the API surface as your community tells you what they need.

If you want to see this pattern in action, Temps ships with a complete external plugin system — sidecar processes over Unix sockets, managed by the same daemon that handles your deployments. Try it on your own infrastructure:

curl -fsSL temps.sh/install.sh | bash

[INTERNAL-LINK: get started with Temps -> /docs/getting-started]

#plugins#extensibility#architecture#sidecar#wasm#developer-platform#build plugin system developer platform