
Manage Your Entire Infrastructure from Claude, Cursor, or Any AI Agent


February 18, 2026

Temps Team


Deployment dashboards are great until you're switching between six browser tabs to find a deployment log, check analytics, and scale your staging environment.

What if your AI agent could do all of that from a single conversation?

Today we're releasing the Temps MCP Server — a Model Context Protocol integration that gives AI assistants like Claude, Cursor, and Windsurf direct access to your entire self-hosted infrastructure.

What Is MCP?

The Model Context Protocol is an open standard that lets AI assistants call tools on external systems. Instead of copy-pasting URLs and reading JSON responses, your AI agent makes structured API calls and formats the results for you.
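Under the hood, each tool invocation is a structured JSON-RPC 2.0 message that the MCP client sends on the agent's behalf. A simplified sketch of what that looks like for one of the Temps analytics tools (the envelope below follows the MCP spec; the `page_path` argument is taken from the examples later in this post):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_page_detail",
    "arguments": { "page_path": "/pricing" }
  }
}
```

You never write these messages yourself; the AI client constructs them and renders the structured result back as text.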

With the Temps MCP Server, your AI agent becomes a full infrastructure operator.


224 Tools Across 30 Categories

The MCP server covers every part of the Temps platform:

| Category | What you can do |
| --- | --- |
| Deployments | Trigger, roll back, pause, resume, cancel. Stream build logs with parsed pipeline stages. |
| Environments | Create staging/production environments, manage env vars, scale replicas, update resource limits. |
| Analytics | Query visitors, sessions, page views, bounce rates. Break down by country, browser, device, UTM parameters, referrer, and 15+ other properties. |
| Containers | List, start, stop, restart. Get resource metrics and runtime logs. |
| Domains & SSL | Add custom domains, verify DNS, renew SSL certificates, manage ACME challenges. |
| Services | Provision and manage PostgreSQL, Redis, S3, and MongoDB. |
| Backups | Configure schedules, manage S3 sources, trigger manual backups. |
| Monitoring | Create uptime monitors, check status, view response time history. |
| Error Tracking | List error groups, view stack traces, get error dashboards and stats. |
| Security | Run vulnerability scans, manage IP access rules, configure security policies. |
| And more | Webhooks, DNS providers, funnels, incidents, audit logs, notifications, load balancer routes, proxy logs, API keys, users. |

Load Only What You Need

224 tools is a lot of context for an AI agent. That's why the server supports category filtering — load only the tools relevant to your workflow:

{
  "mcpServers": {
    "temps": {
      "command": "npx",
      "args": ["@temps-sdk/mcp", "--tools", "deployments,analytics,projects"],
      "env": {
        "TEMPS_API_URL": "https://your-instance.example.com",
        "TEMPS_API_KEY": "tk_..."
      }
    }
  }
}

This loads ~33 tools instead of 224. Your AI agent responds faster, selects the right tool more reliably, and uses less context.

You can also filter via environment variable:

TEMPS_MCP_TOOLS=deployments,analytics npx @temps-sdk/mcp

Run npx @temps-sdk/mcp --help to see all 30 available categories.
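Conceptually, the filter is just an allow-list applied to the tool registry before tools are exposed to the client. A hypothetical TypeScript sketch (`ToolDef`, `parseFilter`, and `filterTools` are illustrative names, not the server's actual internals; the tool names are taken from this post):

```typescript
// Illustrative sketch of category-based tool filtering, not the real implementation.
interface ToolDef {
  name: string;
  category: string;
}

// Parse a comma-separated filter like "deployments,analytics,projects".
// Returns null when no filter is set, meaning "load everything".
function parseFilter(raw: string | undefined): Set<string> | null {
  if (!raw) return null;
  return new Set(raw.split(",").map((c) => c.trim()).filter(Boolean));
}

// Keep only the tools whose category appears in the allow-list.
function filterTools(tools: ToolDef[], allow: Set<string> | null): ToolDef[] {
  return allow ? tools.filter((t) => allow.has(t.category)) : tools;
}

const registry: ToolDef[] = [
  { name: "trigger_deployment", category: "deployments" },
  { name: "get_page_detail", category: "analytics" },
  { name: "create_backup", category: "backups" },
];

const allow = parseFilter("deployments,analytics");
console.log(filterTools(registry, allow).map((t) => t.name));
// → ["trigger_deployment", "get_page_detail"]
```

Whether the filter comes from `--tools` or `TEMPS_MCP_TOOLS`, the effect is the same: a smaller tool list means less context consumed and fewer chances for the agent to pick the wrong tool.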


Real Conversations

Here's what it looks like in practice.

"What's the bounce rate on our pricing page?"

You: What's the bounce rate on /pricing for the last 7 days?

Agent: [calls get_page_detail with page_path="/pricing"]

Page Detail: /pricing

| Metric           | Value  |
| ---              | ---    |
| Unique Visitors  | 1,847  |
| Total Page Views | 2,312  |
| Avg Time on Page | 42.3s  |
| Bounce Rate      | 68.2%  |
| Entry Rate       | 31.5%  |
| Exit Rate        | 45.1%  |

Top Referrers:
| Referrer          | Visits | %     |
| ---               | ---    | ---   |
| google.com        | 892    | 38.6% |
| twitter.com       | 341    | 14.7% |
| (direct)          | 289    | 12.5% |

"Deploy to production and show me the logs"

You: Deploy my-app to production and stream the build logs

Agent: [calls trigger_deployment]
       [calls get_deployment_logs]

Deployment #87 — Pipeline Stages

| # | Stage        | Status     | Duration |
| - | ---          | ---        | ---      |
| 1 | Clone        | success    | 3s       |
| 2 | Build        | success    | 45s      |
| 3 | Push Image   | success    | 12s      |
| 4 | Deploy       | success    | 8s       |
| 5 | Health Check | success    | 5s       |

All stages passed. Deployment is live.

"Break down visitors by country this week"

You: Show me a breakdown of visitors by country for project 2

Agent: [calls get_property_breakdown with group_by="country"]

Country Breakdown — Project 2 (Total: 4,231)

| Country        | Count | %     |
| ---            | ---   | ---   |
| United States  | 1,456 | 34.4% |
| Germany        | 523   | 12.4% |
| United Kingdom | 412   | 9.7%  |
| France         | 287   | 6.8%  |
| Canada         | 245   | 5.8%  |

"Scale staging to 3 replicas"

You: Scale the staging environment to 3 replicas

Agent: [calls scale_environment with replicas=3]

Environment scaled. Staging now running 3 replicas.

Setup in 3 Steps

1. Get an API Key

Create one from your Temps dashboard under Settings > API Keys, or via CLI:

bunx @temps-sdk/cli api-keys create --name "mcp-agent" --role admin

2. Configure Your AI Client

Claude Desktop — add to claude_desktop_config.json:

{
  "mcpServers": {
    "temps": {
      "command": "npx",
      "args": ["@temps-sdk/mcp"],
      "env": {
        "TEMPS_API_URL": "https://your-temps-instance.com",
        "TEMPS_API_KEY": "tk_your_api_key"
      }
    }
  }
}

Claude Code — add to .mcp.json in your project root:

{
  "mcpServers": {
    "temps": {
      "command": "npx",
      "args": ["@temps-sdk/mcp"],
      "env": {
        "TEMPS_API_URL": "https://your-temps-instance.com",
        "TEMPS_API_KEY": "tk_your_api_key"
      }
    }
  }
}

Cursor — add to .cursor/mcp.json:

{
  "mcpServers": {
    "temps": {
      "command": "npx",
      "args": ["@temps-sdk/mcp", "--tools", "deployments,analytics"],
      "env": {
        "TEMPS_API_URL": "https://your-temps-instance.com",
        "TEMPS_API_KEY": "tk_your_api_key"
      }
    }
  }
}

Also works with Windsurf, Cline, OpenCode, OpenAI Codex, and any other MCP-compatible client.

3. Start Asking

No special syntax. Just ask your AI agent to manage your infrastructure in natural language.


Analytics Deep Dive

The 14 analytics tools deserve special attention. You can query your traffic data the same way you'd ask a colleague:

  • "How many unique visitors did we get today?" → get_unique_counts
  • "Show me hourly traffic for the last week" → get_hourly_visits
  • "What are the top pages?" → get_page_paths
  • "What's the bounce rate on /docs?" → get_page_detail
  • "Where is our traffic coming from?" → get_property_breakdown with group_by=referrer_hostname
  • "Which browsers are our users on?" → get_property_breakdown with group_by=browser
  • "Show me UTM campaign performance" → get_property_breakdown with group_by=utm_campaign
  • "Where are users dropping off?" → get_page_flow
  • "What's our overall engagement rate?" → get_general_stats
  • "Who's on the site right now?" → get_active_visitors

21 properties available for breakdowns: country, city, region, browser, device_type, operating_system, referrer_hostname, page_path, channel, language, utm_source, utm_medium, utm_campaign, utm_term, utm_content, and more.
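As a concrete sketch, the UTM campaign question above maps to a tool call whose arguments look roughly like this (only `group_by` is confirmed by the list above; any other arguments the real tool accepts are omitted here):

```json
{
  "name": "get_property_breakdown",
  "arguments": { "group_by": "utm_campaign" }
}
```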


Bug Fixes in This Release

This release also includes fixes that improve reliability:

  • Environment scaling — scale_environment and update_environment_resources now use the correct HTTP method (PUT instead of PATCH). Resource fields corrected from string to integer types.
  • Deployment logs — get_deployment_logs now properly parses JSONL responses and renders a pipeline stage summary with formatted log output.
  • CLI package name — Documentation corrected from @temps/cli to @temps-sdk/cli.

Open Source

The MCP server is part of the Temps ecosystem and fully open source. Inspect the code, contribute tools, or fork it for your own platform.


The Temps MCP Server is available now. Install with npx @temps-sdk/mcp and start managing your infrastructure from any AI agent.

#mcp #ai #claude #cursor #automation #analytics #infrastructure