How to Run a Vulnerability Scanner on Your Deployed Apps

March 12, 2026

Written by Temps Team

You deploy your app, watch the green checkmark appear, and move on to the next feature. Security scanning? That's for enterprise teams with dedicated AppSec engineers. Except it isn't — and waiting until something breaks is how most breaches happen.

According to Verizon's 2024 Data Breach Investigations Report, exploitation of vulnerabilities as the initial access vector tripled year-over-year, accounting for 14% of all breaches (Verizon DBIR, 2024). The median time to exploit a vulnerability after disclosure is now just 5 days. Your deployed app — the one running right now — has attack surface you've never inspected.

This guide covers what vulnerability scanners actually check, which tools are worth your time, how to build a basic scanner yourself, and how to automate the whole process so you don't have to remember to do it.

[INTERNAL-LINK: what is Temps and why self-host -> /blog/introducing-temps-vercel-alternative]

TL;DR: Most web app breaches exploit known, patchable vulnerabilities. A vulnerability scanner checks your deployed apps for missing security headers, SSL misconfigurations, exposed secrets, and OWASP Top 10 issues. You can start with free tools like OWASP ZAP or Nuclei, or use a platform with built-in scanning. Vulnerability exploitation tripled as a breach vector in 2024 (Verizon DBIR, 2024).


What Does a Web Application Vulnerability Scanner Actually Check?

OWASP's own testing data found that 94% of applications tested had some form of broken access control, the number one item on the OWASP Top 10 (OWASP, 2021). A scanner doesn't just look for one thing — it systematically probes your deployed app for the most common and dangerous weaknesses.

Citation capsule: Web vulnerability scanners probe deployed applications for OWASP Top 10 issues, which appear in 94% of tested applications according to OWASP's own data (OWASP, 2021). These checks include security headers, SSL configuration, exposed paths, and injection points — the same weaknesses attackers target first.

Here's what a thorough scanner examines across your live deployment:

Security Headers

Your HTTP response headers are the first line of defense. Scanners check for:

  • Content-Security-Policy (CSP): Controls which scripts, styles, and resources can load. Without it, cross-site scripting (XSS) becomes trivial.
  • Strict-Transport-Security (HSTS): Forces HTTPS connections. Missing HSTS means someone on the same WiFi network can intercept traffic via SSL stripping.
  • X-Frame-Options / frame-ancestors: Prevents clickjacking. Without it, an attacker can embed your login page in a hidden iframe.
  • X-Content-Type-Options: Stops browsers from MIME-sniffing responses into executable types.
  • Referrer-Policy: Controls how much URL information leaks when users click outbound links.
  • Permissions-Policy: Restricts access to browser APIs like camera, microphone, and geolocation.

Most frameworks don't set any of these by default. That's why scanners flag them.
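
To see what passing these checks looks like from the application side, here is a minimal sketch of a header map you can apply in any Node.js framework whose response object exposes setHeader(). The policy values are illustrative starting points, not drop-in production values; a real CSP in particular has to match your actual script and asset origins.

```javascript
// Baseline security headers a scanner expects to see.
// Values are conservative starting points; tune CSP for your assets.
const SECURITY_HEADERS = {
  'Content-Security-Policy': "default-src 'self'",
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'Referrer-Policy': 'strict-origin-when-cross-origin',
  'Permissions-Policy': 'camera=(), microphone=(), geolocation=()'
};

// Works with Node's http.ServerResponse or Express's res object.
function applySecurityHeaders(res) {
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
  return res;
}
```

In Express this would run as middleware (`app.use((req, res, next) => { applySecurityHeaders(res); next(); })`); libraries like helmet do the same thing with more nuance.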

SSL/TLS Configuration

A valid certificate isn't enough. Scanners verify:

  • Certificate expiration dates and chain validity
  • Protocol versions (TLS 1.2 minimum; TLS 1.3 preferred)
  • Cipher suite strength — weak ciphers like RC4 or 3DES are still accepted by some servers
  • OCSP stapling and certificate transparency

Exposed Sensitive Paths

This is where scanners earn their keep. They probe paths like /.env, /.git/config, /wp-admin, /.DS_Store, /server-status, and /debug. You'd be surprised how often a .env file with database credentials is publicly accessible because someone forgot a .dockerignore entry.

Additional Checks

  • Cookie flags: Missing Secure, HttpOnly, or SameSite attributes
  • CORS configuration: Overly permissive Access-Control-Allow-Origin headers
  • Open redirects: Unvalidated redirect parameters that enable phishing
  • Information disclosure: Server version headers, stack traces in error responses, verbose error messages
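
The cookie-flag check is easy to reproduce yourself. The sketch below audits a raw Set-Cookie value for the three attributes scanners look for; it is a minimal illustration, not a full cookie parser.

```javascript
// Audit a raw Set-Cookie header value for the flags scanners check.
// Splits on ';' so flag names inside cookie values don't false-positive.
function auditCookie(setCookieValue) {
  const attrs = setCookieValue.toLowerCase().split(';').map((s) => s.trim());
  const issues = [];
  if (!attrs.includes('secure')) issues.push('missing Secure');
  if (!attrs.includes('httponly')) issues.push('missing HttpOnly');
  if (!attrs.some((a) => a.startsWith('samesite='))) issues.push('missing SameSite');
  return issues;
}
```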

[INTERNAL-LINK: securing your deployment environment -> /blog/self-hosted-deployments-saas-security]


How Does Automated Scanning Compare to Manual Testing?

Automated scanners detect up to 70% of known vulnerability types, but consistently miss business logic flaws that require human understanding of the application's purpose (Snyk State of Open Source Security, 2023). The two approaches aren't competing — they're complementary layers in a real security strategy.

Citation capsule: Automated vulnerability scanners can detect up to 70% of known vulnerability categories but miss business logic flaws, authentication bypasses, and race conditions that require contextual understanding (Snyk, 2023). Combining automated and manual testing provides the most complete coverage.

What Automated Scanning Catches

Automated tools excel at repetitive, pattern-based checks. They'll find every missing header, every expired certificate, every exposed .env file across hundreds of endpoints in minutes. They don't get bored, don't forget edge cases in their template library, and don't cost $200/hour.

Here's where automated scanning shines:

  • Missing or misconfigured security headers
  • Known CVEs in server software and dependencies
  • SSL/TLS weaknesses
  • Default credentials and exposed admin panels
  • SQL injection and XSS in standard form inputs
  • Outdated software versions with public exploits

What Only Manual Testing Finds

Automated tools can't understand intent. They won't notice that your "transfer funds" endpoint doesn't verify the sender owns the account. They won't catch that your discount code can be applied twice through a race condition. They won't realize your password reset flow leaks whether an email exists.

Manual penetration testing finds:

  • Business logic bypasses (price manipulation, privilege escalation)
  • Authentication and authorization flaws in multi-step flows
  • Race conditions and time-of-check-to-time-of-use bugs
  • Chained vulnerabilities that are harmless individually
  • Social engineering vectors
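
The discount-code race mentioned above is easy to demonstrate and impossible for a pattern-matching scanner to flag. This sketch uses a hypothetical in-memory store, with an artificial delay standing in for a database round-trip; two concurrent redemptions both pass the used check before either one records the redemption.

```javascript
// A check-then-redeem flow with no atomicity: the classic TOCTOU bug.
const codes = new Map([['SAVE20', { used: false }]]);

async function redeem(code) {
  const entry = codes.get(code);
  if (!entry || entry.used) return false;      // time of check
  await new Promise((r) => setTimeout(r, 10)); // simulated DB latency
  entry.used = true;                           // time of use
  return true;
}

// Two concurrent "requests": both checks run before either write lands.
Promise.all([redeem('SAVE20'), redeem('SAVE20')])
  .then((results) => console.log(results)); // prints [ true, true ]: applied twice
```

The fix is making the check and the write atomic (a conditional UPDATE, a unique constraint, or a row lock), which is a property of intent no scanner template can test for.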

The Practical Approach

Run automated scans on every deployment. Budget for manual pen testing annually or before major launches. Automated scanning is your baseline — it catches the 80% of issues that attackers scan for with their own automated tools. But do both if your app handles sensitive data.

Aspect      | Automated                  | Manual
------------|----------------------------|-------------------------------------
Speed       | Minutes                    | Days to weeks
Cost        | Free to ~$500/mo           | $5,000-$30,000+ per engagement
Coverage    | Known patterns, signatures | Logic flaws, chained attacks
Consistency | Same checks every time     | Depends on tester skill
When to use | Every deploy, weekly       | Annually, pre-launch, post-incident

[INTERNAL-LINK: implementing security in your deploy pipeline -> /blog/how-to-encrypt-environment-variables-at-rest]


Which Open-Source Scanning Tools Are Worth Using?

OWASP ZAP has over 13,000 GitHub stars and remains the most widely used open-source web application security scanner (OWASP ZAP GitHub, 2024). But it's not the only option, and the best tool depends on your use case, infrastructure, and how much RAM you're willing to spare.

Citation capsule: OWASP ZAP is the most popular open-source web security scanner with over 13,000 GitHub stars, but newer tools like Nuclei (22,000+ stars) offer faster template-based scanning for CI/CD pipelines (OWASP ZAP GitHub, 2024; Nuclei GitHub, 2024).

OWASP ZAP

ZAP is the Swiss Army knife of web application scanning. It runs as a proxy between your browser and the target app, intercepting and analyzing every request. The active scanner sends crafted payloads to discover injection points, XSS vulnerabilities, and authentication issues.

Pros: Comprehensive coverage, GUI and headless modes, active community, extensive API for automation, HUD mode for manual testing.

Cons: Resource-hungry — plan for 1-2GB RAM minimum. Slow on large applications. Steep learning curve for advanced features. Active scanning can break things in production if you're not careful.

Best for: Deep, thorough scans of staging environments before production deployment.

Nuclei

Nuclei from ProjectDiscovery has over 22,000 GitHub stars and takes a fundamentally different approach (Nuclei GitHub, 2024). Instead of trying to discover vulnerabilities through fuzzing, it runs community-maintained YAML templates — over 9,000 of them — that check for specific, known issues.

Pros: Blazing fast. Low resource usage. Template-based means you know exactly what it's checking. Huge community template library. Easy to add custom checks. Perfect for CI/CD.

Cons: Only finds what templates exist for. No fuzzing or active exploitation. Won't discover novel vulnerabilities unique to your application.

Best for: Fast, repeatable scans in CI/CD pipelines and post-deployment checks.

Nikto

The veteran. Nikto has been around since 2001 and focuses on web server misconfigurations. It checks for dangerous files, outdated server software, and server-specific problems. It's not fancy, but it's thorough for what it does.

Pros: Simple to run. Good at finding server misconfigurations. Well-documented.

Cons: Noisy — generates a lot of HTTP requests. Slow. Limited to server-level checks. No modern webapp vulnerability detection.

Best for: Quick server-level audit alongside a more comprehensive scanner.

Mozilla Observatory

Not a scanner you install — it's a free web service that grades your site's security headers and TLS configuration. You can run it from the command line or the web interface. It focuses entirely on HTTP response headers and gives you a letter grade with specific fix recommendations.

Pros: Zero setup. Clear grading system. Actionable recommendations. API available.

Cons: Only checks headers and TLS. No path discovery, no active scanning, no vulnerability detection beyond configuration.

Best for: Quick header audit on any public URL.

Comparison at a Glance

Tool                | GitHub Stars      | Focus                 | Resource Use | CI/CD Ready
--------------------|-------------------|-----------------------|--------------|---------------
OWASP ZAP           | 13,000+           | Full web app scanning | High (1-2GB) | Yes (headless)
Nuclei              | 22,000+           | Template-based checks | Low (~100MB) | Excellent
Nikto               | 8,000+            | Server misconfig      | Medium       | Basic
Mozilla Observatory | N/A (web service) | Headers & TLS         | None         | API available

[IMAGE: Terminal screenshot showing Nuclei running security templates against a web application — nuclei security scanner terminal output]


Can You Build a Basic Security Scanner Yourself?

You don't need a full-blown tool to start checking your deployments. A basic security scanner that covers headers, SSL, and exposed paths takes roughly 100 lines of code. Building one yourself helps you understand exactly what the bigger tools are doing under the hood.

[ORIGINAL DATA]

Here's a practical Node.js scanner that checks the most critical issues:

// basic-security-scanner.js (requires Node 18+ for the global fetch API)
const https = require('https');
const { URL } = require('url');

const SENSITIVE_PATHS = [
  '/.env', '/.git/config', '/.DS_Store',
  '/wp-admin', '/server-status', '/.well-known/security.txt',
  '/debug', '/api/docs', '/graphql',
  '/.svn/entries', '/backup.sql', '/phpinfo.php'
];

const REQUIRED_HEADERS = {
  'strict-transport-security': 'Missing HSTS — browsers can be downgraded to HTTP',
  'content-security-policy': 'Missing CSP — XSS attacks have no restriction',
  'x-frame-options': 'Missing X-Frame-Options — clickjacking possible',
  'x-content-type-options': 'Missing X-Content-Type-Options — MIME sniffing risk',
  'referrer-policy': 'Missing Referrer-Policy — URL data may leak to third parties',
  'permissions-policy': 'Missing Permissions-Policy — browser APIs unrestricted'
};

async function checkHeaders(url) {
  console.log('\n--- Security Headers ---');
  const res = await fetch(url);
  const headers = res.headers;
  let issues = 0;

  for (const [header, warning] of Object.entries(REQUIRED_HEADERS)) {
    if (!headers.get(header)) {
      console.log(`  FAIL: ${warning}`);
      issues++;
    } else {
      console.log(`  PASS: ${header} is set`);
    }
  }

  // Check for information disclosure
  const server = headers.get('server');
  if (server) {
    console.log(`  WARN: Server header exposes: "${server}"`);
  }

  const poweredBy = headers.get('x-powered-by');
  if (poweredBy) {
    console.log(`  WARN: X-Powered-By exposes: "${poweredBy}"`);
  }

  return issues;
}

async function checkSSL(hostname) {
  console.log('\n--- SSL/TLS ---');
  return new Promise((resolve) => {
    const req = https.request({ hostname, port: 443, method: 'HEAD' }, (res) => {
      const cert = res.socket.getPeerCertificate();
      const expiry = new Date(cert.valid_to);
      const daysLeft = Math.floor((expiry - Date.now()) / 86400000);

      if (daysLeft < 0) {
        console.log(`  FAIL: Certificate expired ${Math.abs(daysLeft)} days ago`);
      } else if (daysLeft < 30) {
        console.log(`  WARN: Certificate expires in ${daysLeft} days`);
      } else {
        console.log(`  PASS: Certificate valid for ${daysLeft} days`);
      }

      console.log(`  INFO: Issuer — ${cert.issuer?.O || 'Unknown'}`);
      console.log(`  INFO: Protocol — ${res.socket.getProtocol()}`);
      resolve(daysLeft < 0 ? 1 : 0);
    });
    req.on('error', (e) => {
      console.log(`  FAIL: SSL connection failed — ${e.message}`);
      resolve(1);
    });
    req.end();
  });
}

async function checkSensitivePaths(baseUrl) {
  console.log('\n--- Exposed Paths ---');
  let issues = 0;

  for (const path of SENSITIVE_PATHS) {
    try {
      const res = await fetch(`${baseUrl}${path}`, {
        redirect: 'manual',
        signal: AbortSignal.timeout(5000)
      });
      if (res.status === 200) {
        console.log(`  FAIL: ${path} returned 200 — likely exposed`);
        issues++;
      } else if (res.status !== 404) {
        console.log(`  WARN: ${path} returned ${res.status}`);
      }
    } catch {
      // Timeout or connection error — path likely doesn't exist
    }
  }

  if (issues === 0) {
    console.log('  PASS: No sensitive paths exposed');
  }
  return issues;
}

async function scan(url) {
  console.log(`\nScanning: ${url}\n${'='.repeat(50)}`);
  const { hostname } = new URL(url);

  const headerIssues = await checkHeaders(url);
  const sslIssues = await checkSSL(hostname);
  const pathIssues = await checkSensitivePaths(url);

  const total = headerIssues + sslIssues + pathIssues;
  console.log(`\n${'='.repeat(50)}`);
  console.log(`Total issues: ${total}`);
  console.log(total === 0 ? 'All checks passed.' : `Found ${total} issue(s) to fix.`);
}

scan(process.argv[2] || 'https://example.com');

Save it, run node basic-security-scanner.js https://your-app.com, and you've got immediate visibility into the basics. Is it comprehensive? No. Does it catch the issues that automated bots scan for every day? Yes.

[PERSONAL EXPERIENCE]

We've found that building even a basic scanner like this shifts your thinking. You start asking "what headers am I returning?" during development instead of after a security audit flags it. The scanner above won't replace ZAP or Nuclei, but it takes 30 seconds to run and catches the most commonly exploited misconfigurations.


How Can AI Improve Vulnerability Analysis?

IBM's 2024 Cost of a Data Breach Report found that organizations using security AI and automation extensively saved an average of $2.22 million per breach compared to those that didn't (IBM, 2024). AI doesn't replace scanning tools — it makes their output actually useful to developers who aren't security specialists.

Citation capsule: Organizations using AI and automation in their security workflows saved $2.22 million on average per data breach compared to those without, according to IBM's 2024 Cost of a Data Breach Report (IBM, 2024). AI transforms raw scan results into prioritized, actionable fix recommendations.

The Raw Output Problem

Here's what a typical Nuclei scan gives you: a wall of template IDs, severity labels, and matched patterns. A ZAP scan produces an HTML report with hundreds of entries grouped by risk level. For a security engineer, that's useful. For a developer trying to ship a feature, it's noise.

Raw scanner output looks like this:

[ssl-dns-names] [ssl] [info] example.com
[tech-detect:nginx] [http] [info] https://example.com
[missing-csp] [http] [info] https://example.com
[missing-hsts] [http] [low] https://example.com
[caa-fingerprint] [dns] [info] example.com
[cors-misconfig] [http] [medium] https://example.com/api
[open-redirect] [http] [medium] https://example.com/login?next=

Seven findings. Which one matters most? Which can wait? What do you actually type to fix each one? The scanner won't tell you.

Using LLMs for Contextual Prioritization

Feed scan results to an LLM and ask for a prioritized action plan. The model can:

  • Explain each finding in plain English. "Your site doesn't send a Content-Security-Policy header, which means any injected script can execute without restriction."
  • Rank by actual risk. An open redirect on a login page is more dangerous than a missing Permissions-Policy header. An LLM can weigh context.
  • Generate specific fix code. Not generic advice — actual Nginx config snippets, Next.js middleware, or Express middleware code for your stack.
  • Identify false positives. Some scanner findings are technically correct but not actually exploitable in your context. AI can flag these.
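
As a sketch of what that looks like in practice, the helper below packs raw line-oriented findings into a triage prompt and posts it to an OpenAI-compatible chat endpoint. The model name is a placeholder and the prompt is illustrative, not a vetted template; swap in whatever provider and wording fit your stack.

```javascript
// Build a triage prompt from raw, line-oriented scanner output.
function buildTriagePrompt(rawFindings) {
  return [
    'You are a security triage assistant for a web developer.',
    'Rank the scanner findings below by real-world risk, explain each',
    'in plain English, and suggest one concrete fix per finding:',
    '',
    rawFindings.trim()
  ].join('\n');
}

// Send it to any OpenAI-compatible chat endpoint (model is a placeholder).
async function triageFindings(rawFindings, apiKey) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: buildTriagePrompt(rawFindings) }]
    })
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```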

What AI Can't Do

Don't hand off your security decisions to a chatbot entirely. AI can misinterpret findings, generate fixes that break functionality, and miss nuanced attack chains. Use it as a triage layer between the scanner and your engineering effort — not as a replacement for understanding what the findings mean.

[UNIQUE INSIGHT]

The biggest value of AI in vulnerability scanning isn't finding new vulnerabilities. It's translation. Security scanners speak in CVE IDs, CWE numbers, and risk scores. Developers speak in "what do I change in my code." AI bridges that gap faster than any documentation can.

[INTERNAL-LINK: AI-powered developer tools -> /blog/ai-gateway-self-hosted-paas]


How Do You Integrate Scanning into Your Deploy Pipeline?

The Qualys 2024 TruRisk Research Report found that vulnerabilities with known exploits remained unpatched for an average of 30.6 days, giving attackers a month-long window (Qualys, 2024). Integrating scanning into your deployment pipeline closes that window automatically.

Citation capsule: Exploitable vulnerabilities remain unpatched for an average of 30.6 days according to Qualys research, creating a month-long attack window (Qualys, 2024). Automated post-deploy scanning catches these gaps before attackers do.

Post-Deploy Scanning

The most practical integration point is immediately after deployment. Your CI/CD pipeline deploys the app, waits for the health check, then runs a security scan against the live URL. If the scan finds critical issues, it alerts the team — or rolls back automatically.

Here's a basic GitHub Actions workflow:

# .github/workflows/security-scan.yml
name: Post-Deploy Security Scan
on:
  workflow_run:
    workflows: ["Deploy"]
    types: [completed]

jobs:
  scan:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: Run Nuclei scan
        uses: projectdiscovery/nuclei-action@main
        with:
          target: https://your-app.com
          templates: technologies,misconfiguration,exposures
          severity: medium,high,critical
          output: nuclei-results.txt

      - name: Check results
        run: |
          if [ -s nuclei-results.txt ]; then
            echo "Security issues found:"
            cat nuclei-results.txt
            # Send to Slack, email, or fail the pipeline
            exit 1
          fi

Scheduled Recurring Scans

Deployments aren't the only time things change. SSL certificates expire. New CVE templates get published. Dependency vulnerabilities get disclosed. Run a full scan weekly — or daily if you handle sensitive data.

A cron-triggered workflow handles this:

on:
  schedule:
    - cron: '0 6 * * 1'  # Every Monday at 6 AM UTC

Preview Environment Scanning

If you deploy preview environments for pull requests, scan those too. Catching a security regression before it hits production is far cheaper than patching it after. Run a lightweight scan — headers and exposed paths — on every preview deployment. Save the deep scan for staging.

Alert on Regression

The critical pattern: track your security baseline. If Monday's scan found 3 issues and Tuesday's found 5, something regressed. Store results, diff them, and alert only on new findings. Otherwise, your team ignores the scanner after the first week.

[INTERNAL-LINK: automating deployment workflows -> /blog/how-to-cancel-stale-deployments-automatically]


How Does Temps Handle Vulnerability Scanning?

Temps includes a built-in security scanner that runs against your deployed applications without requiring any external tools, configuration, or additional services. It's part of the platform — not a third-party integration you need to set up.

When you scan a project in Temps, the scanner checks:

Headers, SSL, and Exposed Paths

Every scan covers the same ground as the tools described above — security headers (CSP, HSTS, X-Frame-Options, and more), SSL certificate validity and configuration, and probing for exposed sensitive paths. Results appear directly in your project dashboard with clear pass/fail indicators.

DNS Configuration

Temps also validates your DNS setup: CAA records, DNSSEC status, SPF/DKIM/DMARC for email-sending domains, and whether your nameserver configuration follows best practices. DNS misconfigurations are a common blind spot that most standalone scanners skip entirely.

AI-Powered Analysis

Raw scan results go through an AI analysis layer that does exactly what we described earlier in this guide — prioritizes findings by actual risk, explains each issue in plain English, and generates specific fix recommendations for your stack. Instead of parsing a wall of technical output, you get actionable items sorted by severity.

[PERSONAL EXPERIENCE]

We built the scanner into Temps because we were tired of the "deploy and forget" pattern ourselves. Every tool we used required separate setup, separate dashboards, and separate alerting. Having the scanner run automatically after each deployment — with results visible in the same place you check your build logs — meant we actually looked at the results. That's the whole point.

Running a Scan

From the Temps dashboard, you can trigger a scan for any deployed project. The scanner runs server-side, so there's nothing to install on your machine. Results persist in your project timeline alongside deployments, logs, and analytics.

For teams running Temps on their own infrastructure, the scanner is completely self-contained. No data leaves your server. No API keys to manage. No scan quota limits.

[INTERNAL-LINK: getting started with Temps -> /docs/getting-started]

[IMAGE: Temps dashboard showing vulnerability scan results with security score and fix recommendations — security scanner dashboard vulnerability results]


Frequently Asked Questions

How often should you run vulnerability scans on production apps?

Run automated scans after every deployment and at least weekly on a schedule. According to Qualys, exploitable vulnerabilities remain unpatched for an average of 30.6 days (Qualys, 2024). Weekly scans catch certificate expirations, new template matches, and configuration drift between deploys.

Do vulnerability scanners cause downtime or break production?

Passive scanners — those that only send normal HTTP requests and inspect responses — don't affect production at all. Active scanners like OWASP ZAP in attack mode can cause issues by sending malicious payloads to form inputs and API endpoints. Run active scans against staging environments, not production. Header checks and path probing are safe for production.

Are free scanners good enough or do you need paid tools?

Free tools cover the essentials. OWASP ZAP and Nuclei are used by professional pen testers and security teams worldwide. Paid tools like Qualys Web Application Scanning (starting around $6,000/year), Checkmarx, and Snyk add compliance reporting, SLA-backed support, and broader coverage. For most teams, free tools plus scheduled automation provide solid baseline coverage.

What's the difference between SAST, DAST, and SCA?

SAST (Static Application Security Testing) analyzes source code before deployment. DAST (Dynamic Application Security Testing) tests running applications — that's what this guide covers. SCA (Software Composition Analysis) checks your dependencies for known CVEs. You need all three eventually, but DAST gives you the most immediate value because it tests what attackers actually see.


Start Scanning Before Attackers Do

The gap between "deployed" and "secure" is where breaches happen. Verizon's data shows vulnerability exploitation tripled as a breach vector in 2024 — and the median time to exploit dropped to 5 days after disclosure (Verizon DBIR, 2024).

You don't need an enterprise security budget to close that gap. Start with the basics: check your headers with Mozilla Observatory, run Nuclei templates against your staging environment, and automate it in your CI pipeline. That alone puts you ahead of most teams.

If you want scanning built into your deployment workflow — with AI-powered analysis, no setup, and results in the same dashboard where you check your deploys — Temps includes it out of the box.

[INTERNAL-LINK: try Temps for free -> /docs/getting-started]

#security #vulnerability-scanning #dast #owasp-zap #nuclei #ci-cd #vulnerability-scanner-deployed-apps