Written by Temps Team
Last updated March 12, 2026
You deploy your app, watch the green checkmark appear, and move on to the next feature. Security scanning? That's for enterprise teams with dedicated AppSec engineers. Except it isn't — and waiting until something breaks is how most breaches happen.
According to Verizon's 2024 Data Breach Investigations Report, exploitation of vulnerabilities as the initial access vector tripled year-over-year, accounting for 14% of all breaches. The median time to exploit a vulnerability after disclosure is now just 5 days. Your deployed app — the one running right now — has attack surface you've never inspected.
This guide covers what vulnerability scanners actually check, which tools are worth your time, how to build a basic scanner yourself, and how to automate the whole process so you don't have to remember to do it.
TL;DR: Most web app breaches exploit known, patchable vulnerabilities. A vulnerability scanner checks your deployed apps for missing security headers, SSL misconfigurations, exposed secrets, and OWASP Top 10 issues. You can start with free tools like OWASP ZAP or Nuclei, or use a platform with built-in scanning. Vulnerability exploitation tripled as a breach vector according to the Verizon DBIR.
In the data behind the 2021 OWASP Top 10, 94% of applications were tested for some form of broken access control, and it had the highest incidence of any category, which is why it sits at number one on the list. A scanner doesn't just look for one thing: it systematically probes your deployed app for the most common and dangerous weaknesses.
Here's what a thorough scanner examines across your live deployment:
Your HTTP response headers are the first line of defense. Scanners check for:
- Strict-Transport-Security (HSTS), so browsers refuse to fall back to plain HTTP
- Content-Security-Policy (CSP), which limits what injected scripts can do
- X-Frame-Options, which blocks clickjacking via hidden iframes
- X-Content-Type-Options, which stops MIME-type sniffing
- Referrer-Policy, so full URLs don't leak to third parties
- Permissions-Policy, which restricts browser APIs like camera and geolocation
Most frameworks don't set any of these by default. That's why scanners flag them.
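If your framework doesn't set them for you, adding the headers is a few lines of middleware. Here's a minimal sketch in plain Node; the values are reasonable starting points, not a policy tuned for your app (CSP in particular needs to match what your pages actually load):

// security-headers.js: illustrative defaults, adjust before relying on them
const http = require('http');

const SECURITY_HEADERS = {
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'Content-Security-Policy': "default-src 'self'",
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'Referrer-Policy': 'strict-origin-when-cross-origin',
  'Permissions-Policy': 'camera=(), microphone=(), geolocation=()'
};

http.createServer((req, res) => {
  // Set every security header before writing the response body
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
  res.end('ok');
}).listen(3000);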
A valid certificate isn't enough. Scanners verify:
- Expiration dates, and how many days remain before renewal is due
- That the full certificate chain is served and trusted
- The negotiated protocol version, with legacy SSL and early TLS versions disabled
- Weak cipher suites and other TLS misconfigurations
This is where scanners earn their keep. They probe paths like /.env, /.git/config, /wp-admin, /.DS_Store, /server-status, and /debug. You'd be surprised how often a .env file with database credentials is publicly accessible because someone forgot a .dockerignore entry.
Scanners also flag cookies set without Secure, HttpOnly, or SameSite attributes, and CORS responses with overly permissive Access-Control-Allow-Origin headers.
Automated scanners detect up to 70% of known vulnerability types, but consistently miss business logic flaws that require human understanding of the application's purpose, according to the Snyk State of Open Source Security report. The two approaches aren't competing — they're complementary layers in a real security strategy.
Automated tools excel at repetitive, pattern-based checks. They'll find every missing header, every expired certificate, every exposed .env file across hundreds of endpoints in minutes. They don't get bored, don't forget edge cases in their template library, and don't cost $200/hour.
Here's where automated scanning shines:
- Missing or misconfigured security headers, across every route, on every deploy
- Expired or soon-to-expire TLS certificates
- Exposed files and paths like /.env, /.git/config, and debug endpoints
- Known vulnerabilities and misconfigurations that match published signatures
- Configuration drift between deployments
Automated tools can't understand intent. They won't notice that your "transfer funds" endpoint doesn't verify the sender owns the account. They won't catch that your discount code can be applied twice through a race condition. They won't realize your password reset flow leaks whether an email exists.
Manual penetration testing finds:
- Business logic flaws, like the fund-transfer and discount-code examples above
- Authorization gaps that depend on knowing who should own which resource
- Chained attacks that combine several low-severity findings into a real compromise
- Information leaks that only matter in the context of your product
Run automated scans on every deployment. Budget for manual pen testing annually or before major launches. Automated scanning is your baseline — it catches the 80% of issues that attackers scan for with their own automated tools. But do both if your app handles sensitive data.
| Aspect | Automated | Manual |
|---|---|---|
| Speed | Minutes | Days to weeks |
| Cost | Free to ~$500/mo | $5,000-$30,000+ per engagement |
| Coverage | Known patterns, signatures | Logic flaws, chained attacks |
| Consistency | Same checks every time | Depends on tester skill |
| When to use | Every deploy, weekly | Annually, pre-launch, post-incident |
OWASP ZAP has over 13,000 GitHub stars and remains the most widely used open-source web application security scanner. But it's not the only option, and the best tool depends on your use case, infrastructure, and how much RAM you're willing to spare.
ZAP is the Swiss Army knife of web application scanning. It runs as a proxy between your browser and the target app, intercepting and analyzing every request. The active scanner sends crafted payloads to discover injection points, XSS vulnerabilities, and authentication issues.
Pros: Comprehensive coverage, GUI and headless modes, active community, extensive API for automation, HUD mode for manual testing.
Cons: Resource-hungry — plan for 1-2GB RAM minimum. Slow on large applications. Steep learning curve for advanced features. Active scanning can break things in production if you're not careful.
Best for: Deep, thorough scans of staging environments before production deployment.
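For CI use, ZAP ships packaged scan scripts that run headless from its official Docker image. A passive baseline scan, for example, can be kicked off with something like docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://staging.your-app.com; check the ZAP documentation for the current image name and flags before wiring it into a pipeline.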
Nuclei from ProjectDiscovery has over 22,000 GitHub stars and takes a fundamentally different approach. Instead of trying to discover vulnerabilities through fuzzing, it runs community-maintained YAML templates — over 9,000 of them — that check for specific, known issues.
Pros: Blazing fast. Low resource usage. Template-based means you know exactly what it's checking. Huge community template library. Easy to add custom checks. Perfect for CI/CD.
Cons: Only finds what templates exist for. No fuzzing or active exploitation. Won't discover novel vulnerabilities unique to your application.
Best for: Fast, repeatable scans in CI/CD pipelines and post-deployment checks.
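A typical post-deploy run looks something like nuclei -u https://your-app.com -severity medium,high,critical -o results.txt, which scans a single target with the default template set and writes any matches to a file. Flag names occasionally change between releases, so confirm against nuclei -h for the version you install.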
The veteran. Nikto has been around since 2001 and focuses on web server misconfigurations. It checks for dangerous files, outdated server software, and server-specific problems. It's not fancy, but it's thorough for what it does.
Pros: Simple to run. Good at finding server misconfigurations. Well-documented.
Cons: Noisy — generates a lot of HTTP requests. Slow. Limited to server-level checks. No modern webapp vulnerability detection.
Best for: Quick server-level audit alongside a more comprehensive scanner.
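Running it is a single command along the lines of nikto -h https://your-app.com. Expect it to take a while, and to show up very visibly in your server's access logs.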
Mozilla Observatory isn't a scanner you install; it's a free web service that grades your site's security headers and TLS configuration. You can run it from the command line or the web interface. It focuses entirely on HTTP response headers and gives you a letter grade with specific fix recommendations.
Pros: Zero setup. Clear grading system. Actionable recommendations. API available.
Cons: Only checks headers and TLS. No path discovery, no active scanning, no vulnerability detection beyond configuration.
Best for: Quick header audit on any public URL.
| Tool | GitHub Stars | Focus | Resource Use | CI/CD Ready |
|---|---|---|---|---|
| OWASP ZAP | 13,000+ | Full web app scanning | High (1-2GB) | Yes (headless) |
| Nuclei | 22,000+ | Template-based checks | Low (~100MB) | Excellent |
| Nikto | 8,000+ | Server misconfig | Medium | Basic |
| Mozilla Observatory | N/A (web service) | Headers & TLS | None | API available |
[IMAGE: Terminal screenshot showing Nuclei running security templates against a web application — nuclei security scanner terminal output]
You don't need a full-blown tool to start checking your deployments. A basic security scanner that covers headers, SSL, and exposed paths takes roughly 100 lines of code. Building one yourself helps you understand exactly what the bigger tools are doing under the hood.
Here's a practical Node.js scanner that checks the most critical issues:
// basic-security-scanner.js
const https = require('https');
const { URL } = require('url');

// Paths that commonly expose secrets, internals, or admin surfaces
const SENSITIVE_PATHS = [
  '/.env', '/.git/config', '/.DS_Store',
  '/wp-admin', '/server-status', '/.well-known/security.txt',
  '/debug', '/api/docs', '/graphql',
  '/.svn/entries', '/backup.sql', '/phpinfo.php'
];

// Headers every response should carry, and the risk each absence implies
const REQUIRED_HEADERS = {
  'strict-transport-security': 'Missing HSTS — browsers can be downgraded to HTTP',
  'content-security-policy': 'Missing CSP — XSS attacks have no restriction',
  'x-frame-options': 'Missing X-Frame-Options — clickjacking possible',
  'x-content-type-options': 'Missing X-Content-Type-Options — MIME sniffing risk',
  'referrer-policy': 'Missing Referrer-Policy — URL data may leak to third parties',
  'permissions-policy': 'Missing Permissions-Policy — browser APIs unrestricted'
};

// Fetch the target once and report any missing security headers (Node 18+ global fetch)
async function checkHeaders(url) {
  console.log('\n--- Security Headers ---');
  const res = await fetch(url);
  const headers = res.headers;
  let issues = 0;
  for (const [header, warning] of Object.entries(REQUIRED_HEADERS)) {
    if (!headers.get(header)) {
      console.log(` FAIL: ${warning}`);
      issues++;
    } else {
      console.log(` PASS: ${header} is set`);
    }
  }
  // Check for information disclosure
  const server = headers.get('server');
  if (server) {
    console.log(` WARN: Server header exposes: "${server}"`);
  }
  const poweredBy = headers.get('x-powered-by');
  if (poweredBy) {
    console.log(` WARN: X-Powered-By exposes: "${poweredBy}"`);
  }
  return issues;
}

// Open a TLS connection and inspect the certificate and negotiated protocol
async function checkSSL(hostname) {
  console.log('\n--- SSL/TLS ---');
  return new Promise((resolve) => {
    const req = https.request({ hostname, port: 443, method: 'HEAD' }, (res) => {
      const cert = res.socket.getPeerCertificate();
      const expiry = new Date(cert.valid_to);
      const daysLeft = Math.floor((expiry - Date.now()) / 86400000);
      if (daysLeft < 0) {
        console.log(` FAIL: Certificate expired ${Math.abs(daysLeft)} days ago`);
      } else if (daysLeft < 30) {
        console.log(` WARN: Certificate expires in ${daysLeft} days`);
      } else {
        console.log(` PASS: Certificate valid for ${daysLeft} days`);
      }
      console.log(` INFO: Issuer — ${cert.issuer?.O || 'Unknown'}`);
      console.log(` INFO: Protocol — ${res.socket.getProtocol()}`);
      resolve(daysLeft < 0 ? 1 : 0);
    });
    req.on('error', (e) => {
      console.log(` FAIL: SSL connection failed — ${e.message}`);
      resolve(1);
    });
    req.end();
  });
}

// Probe common sensitive paths; anything that answers 200 is likely exposed
async function checkSensitivePaths(baseUrl) {
  console.log('\n--- Exposed Paths ---');
  let issues = 0;
  for (const path of SENSITIVE_PATHS) {
    try {
      const res = await fetch(`${baseUrl}${path}`, {
        redirect: 'manual',
        signal: AbortSignal.timeout(5000)
      });
      if (res.status === 200) {
        console.log(` FAIL: ${path} returned 200 — likely exposed`);
        issues++;
      } else if (res.status !== 404) {
        console.log(` WARN: ${path} returned ${res.status}`);
      }
    } catch {
      // Timeout or connection error — path likely doesn't exist
    }
  }
  if (issues === 0) {
    console.log(' PASS: No sensitive paths exposed');
  }
  return issues;
}

// Run all three checks against one URL and print a summary
async function scan(url) {
  console.log(`\nScanning: ${url}\n${'='.repeat(50)}`);
  const { hostname } = new URL(url);
  const headerIssues = await checkHeaders(url);
  const sslIssues = await checkSSL(hostname);
  const pathIssues = await checkSensitivePaths(url);
  const total = headerIssues + sslIssues + pathIssues;
  console.log(`\n${'='.repeat(50)}`);
  console.log(`Total issues: ${total}`);
  console.log(total === 0 ? 'All checks passed.' : `Found ${total} issue(s) to fix.`);
}

scan(process.argv[2] || 'https://example.com');
Save it, run node basic-security-scanner.js https://your-app.com, and you've got immediate visibility into the basics. Is it comprehensive? No. Does it catch the issues that automated bots scan for every day? Yes.
We've found that building even a basic scanner like this shifts your thinking. You start asking "what headers am I returning?" during development instead of after a security audit flags it. The scanner above won't replace ZAP or Nuclei, but it takes 30 seconds to run and catches the most commonly exploited misconfigurations.
IBM's 2024 Cost of a Data Breach Report found that organizations using security AI and automation extensively saved an average of $2.22 million per breach compared to those that didn't. AI doesn't replace scanning tools — it makes their output actually useful to developers who aren't security specialists.
Here's what a typical Nuclei scan gives you: a wall of template IDs, severity labels, and matched patterns. A ZAP scan produces an HTML report with hundreds of entries grouped by risk level. For a security engineer, that's useful. For a developer trying to ship a feature, it's noise.
Raw scanner output looks like this:
[ssl-dns-names] [ssl] [info] example.com
[tech-detect:nginx] [http] [info] https://example.com
[missing-csp] [http] [info] https://example.com
[missing-hsts] [http] [low] https://example.com
[caa-fingerprint] [dns] [info] example.com
[cors-misconfig] [http] [medium] https://example.com/api
[open-redirect] [http] [medium] https://example.com/login?next=
Seven findings. Which one matters most? Which can wait? What do you actually type to fix each one? The scanner won't tell you.
Feed scan results to an LLM and ask for a prioritized action plan (a minimal sketch of the workflow follows below). The model can:
- Rank findings by real-world risk rather than generic severity labels
- Explain each issue in plain English, including what an attacker could actually do with it
- Suggest concrete fixes for your specific framework and stack
- Separate the quick wins from the changes that need planning
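The wiring depends on your stack, but the shape is simple: read the raw results, send them to a model with a triage prompt, print the plan. A rough sketch, assuming Node 18+ and an OpenAI-compatible API with a key in OPENAI_API_KEY; the nuclei-results.txt file name matches the workflow output shown later in this guide:

// triage.js: a rough sketch of AI-assisted triage for raw scanner output
const fs = require('fs');

async function triage(resultsFile) {
  const findings = fs.readFileSync(resultsFile, 'utf8');
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // swap in whatever model your provider offers
      messages: [
        {
          role: 'system',
          content: 'You are helping a web developer triage security scan findings. ' +
            'Rank them by real-world risk, explain each in one sentence, and give a concrete fix.'
        },
        { role: 'user', content: findings }
      ]
    })
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

triage(process.argv[2] || 'nuclei-results.txt').catch(console.error);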
Don't hand off your security decisions to a chatbot entirely. AI can misinterpret findings, generate fixes that break functionality, and miss nuanced attack chains. Use it as a triage layer between the scanner and your engineering effort — not as a replacement for understanding what the findings mean.
The biggest value of AI in vulnerability scanning isn't finding new vulnerabilities. It's translation. Security scanners speak in CVE IDs, CWE numbers, and risk scores. Developers speak in "what do I change in my code." AI bridges that gap faster than any documentation can.
The Qualys 2024 TruRisk Research Report found that vulnerabilities with known exploits remained unpatched for an average of 30.6 days, giving attackers a month-long window. Integrating scanning into your deployment pipeline closes that window automatically.
The most practical integration point is immediately after deployment. Your CI/CD pipeline deploys the app, waits for the health check, then runs a security scan against the live URL. If the scan finds critical issues, it alerts the team — or rolls back automatically.
Here's a basic GitHub Actions workflow:
# .github/workflows/security-scan.yml
name: Post-Deploy Security Scan

on:
  workflow_run:
    workflows: ["Deploy"]
    types: [completed]

jobs:
  scan:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: Run Nuclei scan
        uses: projectdiscovery/nuclei-action@main
        with:
          target: https://your-app.com
          templates: technologies,misconfiguration,exposures
          severity: medium,high,critical
          output: nuclei-results.txt
      - name: Check results
        run: |
          if [ -s nuclei-results.txt ]; then
            echo "Security issues found:"
            cat nuclei-results.txt
            # Send to Slack, email, or fail the pipeline
            exit 1
          fi
Deployments aren't the only time things change. SSL certificates expire. New CVE templates get published. Dependency vulnerabilities get disclosed. Run a full scan weekly — or daily if you handle sensitive data.
A cron-triggered workflow handles this:
on:
  schedule:
    - cron: '0 6 * * 1'  # Every Monday at 6 AM UTC
If you deploy preview environments for pull requests, scan those too. Catching a security regression before it hits production is far cheaper than patching it after. Run a lightweight scan — headers and exposed paths — on every preview deployment. Save the deep scan for staging.
The critical pattern: track your security baseline. If Monday's scan found 3 issues and Tuesday's found 5, something regressed. Store results, diff them, and alert only on new findings. Otherwise, your team ignores the scanner after the first week.
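One way to do that, assuming each scan writes one finding per line (as Nuclei's text output does): keep the previous run's results around as a CI artifact, and only alert on lines that weren't there before. A small sketch:

// diff-findings.js: baseline tracking for scan results (one finding per line)
const fs = require('fs');

function readLines(file) {
  // A missing baseline (first run) is treated as an empty set
  if (!fs.existsSync(file)) return new Set();
  return new Set(fs.readFileSync(file, 'utf8').split('\n').filter(Boolean));
}

const baseline = readLines(process.argv[2] || 'baseline-results.txt');
const current = readLines(process.argv[3] || 'nuclei-results.txt');

// Only findings absent from the previous run should page anyone
const newFindings = [...current].filter((line) => !baseline.has(line));

if (newFindings.length > 0) {
  console.log(`New findings since last scan (${newFindings.length}):`);
  newFindings.forEach((finding) => console.log(` ${finding}`));
  process.exit(1); // fail the pipeline or trigger the alert
} else {
  console.log('No new findings since last scan.');
}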
Temps includes a built-in security scanner that runs against your deployed applications without requiring any external tools, configuration, or additional services. It's part of the platform — not a third-party integration you need to set up.
When you scan a project in Temps, the scanner covers the same ground as the tools described above:
- Security headers, including CSP, HSTS, X-Frame-Options, and more
- SSL certificate validity and configuration
- Probing for exposed sensitive paths
Results appear directly in your project dashboard with clear pass/fail indicators.
Temps also validates your DNS setup: CAA records, DNSSEC status, SPF/DKIM/DMARC for email-sending domains, and whether your nameserver configuration follows best practices. DNS misconfigurations are a common blind spot that most standalone scanners skip entirely.
Raw scan results go through an AI analysis layer that does exactly what we described earlier in this guide — prioritizes findings by actual risk, explains each issue in plain English, and generates specific fix recommendations for your stack. Instead of parsing a wall of technical output, you get actionable items sorted by severity.
We built the scanner into Temps because we were tired of the "deploy and forget" pattern ourselves. Every tool we used required separate setup, separate dashboards, and separate alerting. Having the scanner run automatically after each deployment — with results visible in the same place you check your build logs — meant we actually looked at the results. That's the whole point.
From the Temps dashboard, you can trigger a scan for any deployed project. The scanner runs server-side, so there's nothing to install on your machine. Results persist in your project timeline alongside deployments, logs, and analytics.
For teams running Temps on their own infrastructure, the scanner is completely self-contained. No data leaves your server. No API keys to manage. No scan quota limits.
[IMAGE: Temps dashboard showing vulnerability scan results with security score and fix recommendations — security scanner dashboard vulnerability results]
Run automated scans after every deployment and at least weekly on a schedule. According to Qualys, exploitable vulnerabilities remain unpatched for an average of 30.6 days. Weekly scans catch certificate expirations, new template matches, and configuration drift between deploys.
Passive scanners — those that only send normal HTTP requests and inspect responses — don't affect production at all. Active scanners like OWASP ZAP in attack mode can cause issues by sending malicious payloads to form inputs and API endpoints. Run active scans against staging environments, not production. Header checks and path probing are safe for production.
Free tools cover the essentials. OWASP ZAP and Nuclei are used by professional pen testers and security teams worldwide. Paid tools like Qualys Web Application Scanning (starting around $6,000/year), Checkmarx, and Snyk add compliance reporting, SLA-backed support, and broader coverage. For most teams, free tools plus scheduled automation provide solid baseline coverage.
SAST (Static Application Security Testing) analyzes source code before deployment. DAST (Dynamic Application Security Testing) tests running applications — that's what this guide covers. SCA (Software Composition Analysis) checks your dependencies for known CVEs. You need all three eventually, but DAST gives you the most immediate value because it tests what attackers actually see.
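If you're on Node, SCA is the easiest of the three to start with: npm audit checks your installed dependencies against the public advisory database, and npm audit --audit-level=high in CI makes the command fail only when something at or above that severity turns up.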
The gap between "deployed" and "secure" is where breaches happen. According to the Verizon DBIR, vulnerability exploitation tripled as a breach vector, and the median time to exploit dropped to 5 days after disclosure.
You don't need an enterprise security budget to close that gap. Start with the basics: check your headers with Mozilla Observatory, run Nuclei templates against your staging environment, and automate it in your CI pipeline. That alone puts you ahead of most teams.
If you want scanning built into your deployment workflow — with AI-powered analysis, no setup, and results in the same dashboard where you check your deploys — Temps includes it out of the box.