
Security Testing

Security testing validates that applications protect data, resist attacks, and enforce security policies through automated tools, manual testing, implementation verification, and continuous monitoring.

Overview

While functional testing verifies correct behavior, security testing verifies secure behavior: ensuring the application does what it should while preventing what it shouldn't.

Key aspects covered:

  • Automated Testing: SAST, DAST, dependency scanning, secret scanning
  • Manual Assessment: Penetration testing and threat modeling
  • Implementation Testing: Integration tests for authentication, authorization, and input validation
  • Runtime Monitoring: Real-time security event detection
  • Incident Response: Systematic breach detection and recovery procedures
Related Topics

See also: Security Overview for security principles, Authentication and Authorization for controls to test, Input Validation for injection prevention, and Testing Strategy for overall testing approach.

Security Mindset Required

Automated tools catch many vulnerabilities, but developers must understand security principles to write secure code. Review Security Overview for foundational knowledge. Security is everyone's responsibility.


Core Principles

  • Shift Left Security: Integrate security testing early in development, not just before release
  • Defense in Depth: Layer multiple security controls - no single point of failure
  • Automated Scanning: Run security scans continuously in CI/CD pipelines
  • Implementation Verification: Test that security controls work as implemented through integration tests
  • Continuous Monitoring: Monitor production for security events and respond systematically
  • Fail Securely: Design systems to fail closed, denying access rather than allowing it
  • Least Privilege: Grant minimum necessary permissions to users and services
  • Regular Updates: Patch dependencies promptly when vulnerabilities are disclosed

Security Testing Pyramid

The security testing pyramid balances frequency, cost, and coverage. Fast tests (unit, integration, SAST) run on every commit, providing immediate feedback. Dependency scanning runs daily as new CVEs are published. DAST runs weekly since it requires a running application. Manual penetration testing occurs quarterly to find complex vulnerabilities and business logic flaws that automated tools miss.

Security Testing Levels

| Level | Type | Frequency | Tools | Coverage |
|---|---|---|---|---|
| Unit | Security unit tests | Every commit | JUnit, Jest | Authentication logic, validation |
| Integration | Security integration tests | Every commit | TestContainers, MockMvc | Auth flows, authorization |
| SAST | Static code analysis | Every commit | SonarQube, Semgrep, ESLint | Code vulnerabilities |
| Dependency | Vulnerability scanning | Daily | OWASP Dependency Check, Snyk | Known CVEs |
| DAST | Dynamic analysis | Weekly | OWASP ZAP | Running application |
| Penetration | Manual testing | Quarterly | Manual + tools | Full application |

Static Application Security Testing (SAST)

SAST tools analyze source code, bytecode, or binaries to identify security vulnerabilities without executing the code. They find issues like SQL injection, cross-site scripting (XSS), hardcoded secrets, insecure cryptography, and improper input validation by examining code patterns and data flows.

How SAST Works

SAST tools build an abstract representation of your code (abstract syntax tree, control flow graph) and trace data flow from inputs to sensitive operations. If user input flows into a SQL query without sanitization, SAST flags it as SQL injection risk. If sensitive data like passwords appear in log statements, SAST detects potential information exposure.
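As a toy illustration of the detection idea (real SAST engines work on syntax trees and data-flow graphs, not raw text, so treat this as a sketch only), a few lines of Java can flag string concatenation flowing into a query call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class NaiveSastCheck {
    // Flags lines where a query call contains string concatenation.
    // Real SAST engines do this on an AST with taint tracking; this
    // text-level version only illustrates the pattern-matching idea.
    static final Pattern CONCAT_IN_QUERY =
        Pattern.compile("(query|queryForObject|execute)\\s*\\(.*\\+.*\\)");

    static List<Integer> findSuspiciousLines(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (CONCAT_IN_QUERY.matcher(sourceLines.get(i)).find()) {
                hits.add(i + 1); // report 1-based line numbers
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> code = List.of(
            "String sql = \"SELECT * FROM users WHERE username = ?\";",
            "jdbcTemplate.queryForObject(\"SELECT * FROM u WHERE n='\" + name + \"'\", mapper);",
            "jdbcTemplate.queryForObject(sql, mapper, name);");
        System.out.println(findSuspiciousLines(code)); // flags only the concatenated query
    }
}
```

A real analyzer additionally tracks whether the concatenated value originates from user input, which is why SAST false positives require the triage workflow described below.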

Advantages of SAST:

  • Early Detection: Catches issues during development before code runs
  • Complete Coverage: Analyzes all code paths, including rarely executed branches
  • Root Cause: Points to exact code locations causing vulnerabilities
  • Fast Feedback: Runs in seconds to minutes, suitable for CI/CD

Limitations of SAST:

  • False Positives: May flag safe code as vulnerable, requiring manual review
  • Configuration Issues: Misses runtime configuration vulnerabilities (weak TLS, open ports)
  • Context Blind: Can't understand complex business logic or authentication flows
  • Language Support: Quality varies significantly across programming languages

Treat SAST findings triage as a first-class workflow:

  • Classify findings as real, acceptable risk, or false positive.
  • Require a short justification for suppressed findings in code review.
  • Revisit suppressed findings periodically; rules and code context evolve.

SonarQube for Java Security Analysis

SonarQube provides comprehensive SAST for Java, analyzing code quality and security together. It integrates into CI/CD pipelines and provides dashboards tracking security debt over time.

// build.gradle - SonarQube integration
plugins {
    id 'org.sonarqube' version '4.4.1.3373'
    id 'jacoco'
}

sonarqube {
    properties {
        property 'sonar.projectKey', 'payment-service'
        property 'sonar.projectName', 'Payment Service'
        property 'sonar.host.url', 'https://sonarqube.company.com'
        property 'sonar.token', System.getenv('SONAR_TOKEN')

        // Security-specific configuration
        property 'sonar.security.hotspots', 'true'
        property 'sonar.java.binaries', 'build/classes'
        property 'sonar.coverage.jacoco.xmlReportPaths', 'build/reports/jacoco/test/jacocoTestReport.xml'

        // Fail CI if security issues exceed thresholds
        property 'sonar.qualitygate.wait', 'true'
    }
}

// Run SonarQube analysis
// ./gradlew sonarqube

Common Java vulnerabilities SonarQube detects:

  • SQL injection through string concatenation
  • Path traversal from unsanitized file paths
  • Insecure cryptography (weak algorithms, hardcoded keys)
  • Information exposure through exception messages or logs
  • Resource exhaustion from unbounded collections
  • Insecure deserialization

// BAD: SQL injection vulnerability - SonarQube flags this
public User findUserByUsername(String username) {
    String sql = "SELECT * FROM users WHERE username = '" + username + "'";
    return jdbcTemplate.queryForObject(sql, new UserRowMapper());
}

// GOOD: Secure parameterized query
public User findUserByUsername(String username) {
    String sql = "SELECT * FROM users WHERE username = ?";
    return jdbcTemplate.queryForObject(sql, new UserRowMapper(), username);
}

// BAD: Path traversal vulnerability - user controls file path
@GetMapping("/files/{filename}")
public ResponseEntity<Resource> downloadFile(@PathVariable String filename) {
    Path filePath = Paths.get("/data/files/" + filename); // Dangerous!
    Resource resource = new FileSystemResource(filePath);
    return ResponseEntity.ok(resource);
}

// GOOD: Validate and sanitize file paths
@GetMapping("/files/{filename}")
public ResponseEntity<Resource> downloadFile(@PathVariable String filename) {
    // Reject suspicious patterns
    if (filename.contains("..") || filename.contains("/") || filename.contains("\\")) {
        throw new IllegalArgumentException("Invalid filename");
    }

    Path filePath = Paths.get("/data/files").resolve(filename).normalize();

    // Verify resolved path is within allowed directory
    if (!filePath.startsWith("/data/files")) {
        throw new SecurityException("Access denied");
    }

    Resource resource = new FileSystemResource(filePath);
    return ResponseEntity.ok(resource);
}

Semgrep for Pattern-Based Security Scanning

Semgrep provides lightweight, fast SAST using pattern-based rules. Unlike heavyweight tools that require full compilation, Semgrep analyzes source code directly using patterns that look like the code you're searching for. This makes it ideal for CI/CD where speed matters.

# .semgrep.yml - Custom security rules
rules:
  - id: spring-sql-injection
    patterns:
      - pattern: jdbcTemplate.query($SQL, ...)
      - pattern-not: jdbcTemplate.query("...", ...)
      - metavariable-regex:
          metavariable: $SQL
          regex: .*\+.*
    message: Potential SQL injection from string concatenation
    severity: ERROR
    languages: [java]
    metadata:
      category: security
      cwe: "CWE-89: SQL Injection"
      owasp: "A03:2021 - Injection"

  - id: hardcoded-secret
    patterns:
      - pattern-either:
          - pattern: password = "..."
          - pattern: apiKey = "..."
          - pattern: secretKey = "..."
    message: Hardcoded secret detected
    severity: ERROR
    languages: [java, typescript]
    metadata:
      category: security
      cwe: "CWE-798: Use of Hard-coded Credentials"

  - id: insecure-random
    pattern: new Random()
    message: Use SecureRandom for security-sensitive operations
    severity: WARNING
    languages: [java]
    metadata:
      category: security
      cwe: "CWE-330: Use of Insufficiently Random Values"

# Run Semgrep with pre-built rulesets
semgrep --config=auto . # Auto-detect and run relevant rules

# Run specific rulesets
semgrep --config=p/owasp-top-ten .
semgrep --config=p/security-audit .

# Run custom rules
semgrep --config=.semgrep.yml .

# CI/CD integration - fail on errors
semgrep --config=auto --error --strict .

Semgrep advantages:

  • Fast: Analyzes code in seconds without compilation
  • Customizable: Write rules for project-specific security patterns
  • Low false positives: Pattern matching is precise when rules are well-written
  • Multi-language: Supports Java, TypeScript, Python, Go, and more

TypeScript Security Scanning with ESLint Security Plugins

TypeScript applications need security scanning too - XSS vulnerabilities, insecure APIs, and prototype pollution affect frontend and backend JavaScript code.

// package.json
{
  "devDependencies": {
    "eslint": "^8.56.0",
    "eslint-plugin-security": "^1.7.1",
    "eslint-plugin-no-secrets": "^0.8.9",
    "@typescript-eslint/eslint-plugin": "^6.19.0",
    "@typescript-eslint/parser": "^6.19.0"
  }
}

// .eslintrc.js
module.exports = {
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint', 'security', 'no-secrets'],
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
    'plugin:security/recommended',
  ],
  rules: {
    // Security rules
    'security/detect-object-injection': 'error',
    'security/detect-non-literal-regexp': 'warn',
    'security/detect-unsafe-regex': 'error',
    'security/detect-buffer-noassert': 'error',
    'security/detect-eval-with-expression': 'error',
    'security/detect-no-csrf-before-method-override': 'error',
    'security/detect-possible-timing-attacks': 'warn',
    'no-secrets/no-secrets': 'error',

    // TypeScript-specific security patterns
    '@typescript-eslint/no-explicit-any': 'error',
    '@typescript-eslint/no-unsafe-assignment': 'error',
    '@typescript-eslint/no-unsafe-call': 'error',
    '@typescript-eslint/no-unsafe-member-access': 'error',
  },
};

// BAD: XSS vulnerability - unsanitized user input in DOM
function displayUserComment(comment: string) {
  document.getElementById('comment')!.innerHTML = comment; // Dangerous!
}

// GOOD: Sanitize HTML or use textContent
import DOMPurify from 'dompurify';

function displayUserComment(comment: string) {
  const sanitized = DOMPurify.sanitize(comment);
  document.getElementById('comment')!.innerHTML = sanitized;
}

// Or avoid HTML entirely
function displayUserComment(comment: string) {
  document.getElementById('comment')!.textContent = comment;
}

Dynamic Application Security Testing (DAST)

DAST tools test running applications by simulating attacks from an external perspective, like a hacker would. They send malicious payloads, attempt authentication bypasses, and probe for misconfigurations without access to source code.

How DAST Works

DAST tools crawl your application to discover pages, forms, and APIs, then fuzz inputs with attack payloads to find vulnerabilities. They observe responses to detect successful exploits - SQL errors revealing database structure, reflected XSS payloads executing JavaScript, authentication bypasses accessing restricted resources.
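The scanner's core loop is simple in outline: send a payload, then classify the response. A minimal Java sketch of the classification step follows; the signature strings are illustrative, not ZAP's actual rule set:

```java
import java.util.List;

public class DastResponseHeuristics {
    // Error fragments suggesting an injected payload reached the database.
    // Illustrative examples only - real scanners ship much larger signature sets.
    static final List<String> SQL_ERROR_SIGNATURES = List.of(
        "SQLException", "syntax error", "ORA-01756", "unterminated quoted string");

    // True if the response body contains a known database error signature.
    static boolean looksLikeSqlError(String responseBody) {
        String lower = responseBody.toLowerCase();
        return SQL_ERROR_SIGNATURES.stream()
            .anyMatch(sig -> lower.contains(sig.toLowerCase()));
    }

    // A response echoing the payload verbatim hints at reflected XSS.
    static boolean reflectsPayload(String payload, String responseBody) {
        return responseBody.contains(payload);
    }

    public static void main(String[] args) {
        String body = "500 Internal Server Error: syntax error at or near \"'\"";
        System.out.println(looksLikeSqlError(body));                              // true
        System.out.println(reflectsPayload("<script>alert(1)</script>", body));   // false
    }
}
```

Real DAST tools add context checks (is the reflection inside an executable context? does the error persist across requests?) to cut false positives, but the send-and-classify structure is the same.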

Advantages of DAST:

  • Runtime Issues: Finds vulnerabilities in running applications including configuration errors
  • Black Box: No source code access needed - tests like an attacker would
  • Framework Agnostic: Works regardless of programming language or framework
  • Proof of Concept: Provides actual exploit payloads demonstrating impact

Limitations of DAST:

  • Limited Coverage: Only tests accessible code paths - can't reach code behind authentication without credentials
  • Slow: Comprehensive scans take hours, too slow for every commit
  • False Negatives: May miss vulnerabilities in untested code paths
  • Late Detection: Finds issues late in development lifecycle

OWASP ZAP for Automated Security Testing

OWASP ZAP (Zed Attack Proxy) is a free, open-source DAST tool widely used for web application security testing. It provides automated scanning, passive analysis, and manual testing capabilities.

# .gitlab-ci.yml - OWASP ZAP integration
zap-scan:
  stage: security
  image: owasp/zap2docker-stable
  services:
    - name: postgres:16
      alias: postgres
  variables:
    ZAP_TARGET: "http://localhost:8080"
  script:
    # Start application
    - apt-get update && apt-get install -y default-jdk
    - ./gradlew bootRun &
    - sleep 60 # Wait for app startup

    # Run ZAP baseline scan (passive + spider)
    - zap-baseline.py -t $ZAP_TARGET -r zap-report.html -J zap-report.json

    # For authenticated scan, provide API key or session token
    # - zap-full-scan.py -t $ZAP_TARGET -z "-config api.key=$ZAP_API_KEY" -r zap-report.html
  artifacts:
    paths:
      - zap-report.html
      - zap-report.json
    when: always
    expire_in: 30 days
  allow_failure: true # Don't block initially; review findings first
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

ZAP scan types:

  • Baseline scan: Fast passive scan plus spider (crawl). Quick feedback, low false positives. Run nightly.
  • Full scan: Active attack with fuzzing. Comprehensive but slow (hours). Run weekly.
  • API scan: OpenAPI/Swagger specification-based testing. Tests all API endpoints automatically.

# Run ZAP baseline scan locally
docker run -v $(pwd):/zap/wrk:rw \
    owasp/zap2docker-stable zap-baseline.py \
    -t http://localhost:8080 \
    -r zap-report.html

# Run ZAP with OpenAPI specification
docker run -v $(pwd):/zap/wrk:rw \
    owasp/zap2docker-stable zap-api-scan.py \
    -t http://localhost:8080/v3/api-docs \
    -f openapi \
    -r zap-report.html

# Authenticated scan with stored session
docker run -v $(pwd):/zap/wrk:rw \
    owasp/zap2docker-stable zap-full-scan.py \
    -t http://localhost:8080 \
    -n session.context \
    -r zap-report.html

Common Vulnerabilities DAST Detects

SQL Injection: DAST sends SQL metacharacters (' OR '1'='1, '; DROP TABLE users--) in input fields and observes responses. Database errors or changed behavior indicate SQL injection.

Cross-Site Scripting (XSS): DAST injects JavaScript payloads (<script>alert('XSS')</script>, <img src=x onerror=alert('XSS')>) and checks if they execute in responses. Reflected XSS appears immediately; stored XSS requires crawling to find where injected content displays.

Authentication and Authorization Flaws: DAST attempts to access protected resources without authentication, tests password reset flows for token predictability, and checks if users can access other users' data by manipulating IDs.

Security Misconfigurations: DAST identifies insecure HTTP headers (missing Content-Security-Policy, X-Frame-Options), exposed sensitive files (.git, .env, backup files), verbose error messages leaking implementation details, and outdated software versions with known vulnerabilities.

Cross-Site Request Forgery (CSRF): DAST checks if state-changing operations (payment submission, account modification) require CSRF tokens. Missing CSRF protection allows attackers to trick users into unwanted actions.


Dependency Scanning

Modern applications depend on hundreds of third-party libraries. Vulnerabilities in dependencies affect your application even if your code is secure. Dependency scanning identifies known vulnerabilities (CVEs) in your dependencies so you can update or mitigate them before attackers exploit them.

How Dependency Scanning Works

Vulnerability databases (National Vulnerability Database, GitHub Advisory Database, npm Advisory Database) track known vulnerabilities with CVE identifiers. Dependency scanners compare your dependencies against these databases to find matches. When a match is found, the scanner reports the vulnerability, severity (CVSS score), and recommended fixes.
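The matching step reduces to comparing each dependency's coordinates against advisories with version ranges. A minimal sketch in Java, using a hypothetical `example-lib` artifact and a placeholder advisory (real scanners use full CPE/GAV matching and semver ranges):

```java
import java.util.List;

public class DependencyScanSketch {
    // A published advisory: affected artifact plus the first fixed version.
    record Advisory(String artifact, String fixedInVersion, String cve) {}

    // Compares dotted versions numerically, segment by segment.
    static int compareVersions(String a, String b) {
        String[] as = a.split("\\."), bs = b.split("\\.");
        for (int i = 0; i < Math.max(as.length, bs.length); i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) return Integer.compare(ai, bi);
        }
        return 0;
    }

    // A dependency ("artifact:version") is vulnerable if an advisory exists
    // for it and its version is below the first fixed release.
    static List<String> findVulnerable(List<String> deps, List<Advisory> db) {
        return deps.stream().filter(dep -> {
            String[] parts = dep.split(":");
            return db.stream().anyMatch(adv -> adv.artifact().equals(parts[0])
                && compareVersions(parts[1], adv.fixedInVersion()) < 0);
        }).toList();
    }

    public static void main(String[] args) {
        var db = List.of(new Advisory("example-lib", "2.17.1", "CVE-XXXX-YYYY"));
        // Only the older version is reported.
        System.out.println(findVulnerable(
            List.of("example-lib:2.14.0", "example-lib:2.17.1"), db));
    }
}
```

The hard parts in practice are accurate coordinate matching (avoiding false positives on similarly named artifacts) and non-numeric version schemes, which is why mature tools maintain their own matching heuristics.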

Dependabot for GitHub/GitLab

Dependabot automatically creates pull requests to update vulnerable dependencies. It monitors your repositories, detects vulnerable versions, and proposes updates with minimal manual intervention.

# .github/dependabot.yml (GitHub) or .gitlab/dependabot.yml (GitLab)
version: 2
updates:
  # Java/Gradle dependencies
  - package-ecosystem: "gradle"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "monday"
    open-pull-requests-limit: 10
    reviewers:
      - "security-team"
    labels:
      - "dependencies"
      - "security"
    # Group minor/patch updates together
    groups:
      minor-and-patch:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # npm dependencies
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily" # Check more frequently for frontend
    versioning-strategy: increase # Update lockfile for security patches

Dependabot workflow:

  1. Dependabot detects vulnerable dependency (e.g., Spring Boot 3.4.x has a published vulnerability)
  2. Creates pull request updating to fixed version (e.g., Spring Boot 3.5.x)
  3. CI runs tests automatically on PR
  4. Developer reviews changes, verifies compatibility, merges PR
  5. Vulnerable dependency is replaced with secure version

OWASP Dependency-Check

OWASP Dependency-Check is a CLI tool that analyzes project dependencies and reports known vulnerabilities. It supports Java, JavaScript, Python, Ruby, PHP, and more.

// build.gradle - OWASP Dependency-Check plugin
plugins {
    id 'org.owasp.dependencycheck' version '8.4.3'
}

dependencyCheck {
    // Fail build on CVSS score >= 7 (high severity)
    failBuildOnCVSS = 7

    // Update vulnerability database
    autoUpdate = true

    // Exclude test dependencies
    scanConfigurations = ['compileClasspath', 'runtimeClasspath']

    // Suppress false positives
    suppressionFile = 'dependency-check-suppressions.xml'

    // Output formats
    formats = ['HTML', 'JSON', 'JUNIT']
}

// Run check
// ./gradlew dependencyCheckAnalyze

# Run OWASP Dependency-Check from CLI
dependency-check --project "Payment Service" \
    --scan ./build/libs \
    --out ./reports \
    --format HTML \
    --failOnCVSS 7

Handling false positives: Dependency scanners sometimes flag dependencies that don't affect your application (e.g., vulnerability in unused code path, or vulnerability already mitigated by your configuration). Suppress false positives with documentation explaining why they're safe.

<!-- dependency-check-suppressions.xml -->
<suppressions>
    <suppress>
        <notes>CVE-2023-12345 affects only Tomcat standalone mode, not Spring Boot embedded</notes>
        <cve>CVE-2023-12345</cve>
    </suppress>
    <suppress>
        <notes>Test dependency only, not included in production</notes>
        <gav regex="true">.*:mockito-core:.*</gav>
    </suppress>
</suppressions>

Snyk for Continuous Dependency Monitoring

Snyk provides commercial dependency scanning with superior vulnerability databases (faster updates than public databases), fix automation, and license compliance checking.

# Install Snyk CLI
npm install -g snyk

# Authenticate
snyk auth

# Test for vulnerabilities
snyk test # Analyze dependencies

# Test and fix automatically
snyk wizard # Interactive fix process

# Monitor project (continuous monitoring)
snyk monitor # Upload snapshot to Snyk dashboard

# Test Docker images
snyk container test openjdk:21-jdk

# Test Infrastructure as Code
snyk iac test terraform/

# .gitlab-ci.yml - Snyk integration
snyk-scan:
  stage: security
  image: snyk/snyk:gradle-jdk21
  script:
    - snyk auth $SNYK_TOKEN
    - snyk test --severity-threshold=high --json-file-output=snyk-report.json
    - snyk monitor # Upload to Snyk dashboard for continuous monitoring
  artifacts:
    paths:
      - snyk-report.json
    when: always
    expire_in: 30 days
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  allow_failure: false # Block on high/critical vulnerabilities

Snyk advantages over open-source tools:

  • Faster vulnerability disclosure: Often reports CVEs before public databases
  • Prioritization: Identifies which vulnerabilities are actually exploitable in your code
  • Automated fixes: Generates pull requests with minimal version bumps
  • License compliance: Flags GPL or copyleft licenses incompatible with your policies

Secret Scanning

Accidentally committing secrets (API keys, passwords, private keys, OAuth tokens) to version control is a common and dangerous mistake. Secret scanning detects these exposures before attackers find them.

git-secrets for Pre-Commit Scanning

git-secrets prevents commits containing secrets by scanning changes before they're committed. Install git-secrets as a pre-commit hook to block accidental exposure.

# Install git-secrets
# macOS: brew install git-secrets
# Linux: https://github.com/awslabs/git-secrets#installing-git-secrets

# Set up git-secrets for repository
cd /path/to/repo
git secrets --install
git secrets --register-aws # Add AWS secret patterns

# Add custom patterns
git secrets --add 'password\s*=\s*.+'
git secrets --add 'api[_-]?key\s*=\s*.+'
git secrets --add 'sk_live_[0-9a-zA-Z]{24}' # Stripe secret key
git secrets --add 'ghp_[0-9a-zA-Z]{36}' # GitHub personal access token

# Scan entire repository history
git secrets --scan-history

When a developer attempts to commit a secret, git-secrets blocks the commit:

$ git commit -m "Add payment configuration"

payment-config.yml:12:api_key = sk_live_abc123xyz789secretkey

[ERROR] Matched one or more prohibited patterns

Possible mitigations:
- Mark false positives as allowed using: git config --add secrets.allowed ...
- List your configured patterns: git config --get-all secrets.patterns
- List your configured allowed patterns: git config --get-all secrets.allowed
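The custom patterns registered above are plain regular expressions, so they can be sanity-checked in Java before rolling them out to the team (the sample token below is fabricated, matching only the shape of a real Stripe key):

```java
import java.util.Map;
import java.util.regex.Pattern;

public class SecretPatternCheck {
    // Same patterns as registered with git-secrets above.
    static final Map<String, Pattern> PATTERNS = Map.of(
        "stripe-secret-key", Pattern.compile("sk_live_[0-9a-zA-Z]{24}"),
        "github-pat", Pattern.compile("ghp_[0-9a-zA-Z]{36}"));

    // True if any registered pattern matches somewhere in the line.
    static boolean containsSecret(String line) {
        return PATTERNS.values().stream().anyMatch(p -> p.matcher(line).find());
    }

    public static void main(String[] args) {
        // Fabricated example value - never test with a real credential.
        System.out.println(containsSecret("api_key = sk_live_" + "a".repeat(24))); // true
        System.out.println(containsSecret("api_key = ${STRIPE_KEY}"));             // false
    }
}
```

Checking patterns against known-good lines (environment variable references, placeholders) helps avoid blocking legitimate commits once the hook is enforced.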

TruffleHog for Historical Secret Scanning

TruffleHog scans git history to find secrets in past commits. Unlike git-secrets which prevents new secrets, TruffleHog finds secrets already committed that must be rotated.

# Install TruffleHog
pip install truffleHog

# Scan repository
trufflehog https://github.com/company/payment-service

# Scan with entropy detection (high-entropy strings likely secrets)
trufflehog --regex --entropy=True https://github.com/company/payment-service

# Scan specific branch
trufflehog --branch=main https://github.com/company/payment-service

# Output JSON for automation
trufflehog --json https://github.com/company/payment-service > secrets-report.json

# .gitlab-ci.yml - TruffleHog secret scanning
trufflehog-scan:
  stage: security
  image: python:3.11
  script:
    - pip install truffleHog
    - trufflehog --regex --entropy=True --json file://$(pwd) > trufflehog-report.json
    - |
      if [ -s trufflehog-report.json ]; then
        echo "Secrets detected!"
        cat trufflehog-report.json
        exit 1
      fi
  artifacts:
    paths:
      - trufflehog-report.json
    when: always
    expire_in: 30 days
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
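The `--entropy=True` option relies on the observation that random tokens have much higher character entropy than prose or identifiers. A sketch of the underlying Shannon-entropy calculation (the threshold any tool applies is its own; values below are only indicative):

```java
import java.util.HashMap;
import java.util.Map;

public class EntropyCheck {
    // Shannon entropy, in bits per character, of the given string.
    static double shannonEntropy(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) counts.merge(c, 1, Integer::sum);
        double entropy = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / s.length();
            entropy -= p * Math.log(p) / Math.log(2);
        }
        return entropy;
    }

    public static void main(String[] args) {
        // An ordinary identifier scores low (~2.8 bits/char)...
        System.out.printf("%.2f%n", shannonEntropy("hello_world"));
        // ...while a random-looking token scores high (~4.6 bits/char).
        System.out.printf("%.2f%n", shannonEntropy("8f4kQz91Xw2pLmT7vBn3RsYc"));
    }
}
```

Entropy detection catches secrets that pattern rules miss (bespoke tokens, random passwords) at the cost of more false positives on hashes and IDs, which is why TruffleHog pairs it with regex rules.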

If secrets are found in history:

  1. Rotate immediately: Assume exposed secrets are compromised. Revoke and generate new credentials.
  2. Remove from history: Use git filter-branch or BFG Repo-Cleaner to remove secrets from all commits.
  3. Force push: Rewrite history on remote (requires coordination with team).
  4. Audit access logs: Check if exposed credentials were used by attackers.

# Remove secrets from history with BFG Repo-Cleaner
java -jar bfg.jar --replace-text secrets.txt repo.git
cd repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push --force

GitHub Secret Scanning

GitHub automatically scans public repositories for exposed secrets and notifies repository owners. For private repositories, enable secret scanning in repository settings (requires GitHub Advanced Security for organizations).

When GitHub detects a secret:

  1. GitHub notifies repository administrators
  2. GitHub notifies the secret's issuer (e.g., AWS, Stripe, Slack)
  3. Issuer may automatically revoke the exposed credential
  4. Developer must rotate the secret and remove it from history

Supported secret types: GitHub scans for 200+ token patterns including AWS keys, Azure tokens, Google Cloud keys, Stripe API keys, Slack tokens, database connection strings, and private SSH keys.


Integration Security Tests

Automated tools (SAST, DAST) find many vulnerabilities, but you also need integration tests that verify your specific security controls work correctly. These tests verify authentication flows, authorization rules, and input validation as implemented in your application.

Authentication Tests

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureMockMvc
class AuthenticationSecurityTest {

    @Autowired
    private MockMvc mockMvc;

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private UserRepository userRepository;

    @Test
    void shouldReturn401WhenNotAuthenticated() throws Exception {
        mockMvc.perform(get("/api/payments"))
            .andExpect(status().isUnauthorized());
    }

    @Test
    void shouldReturn401WhenTokenExpired() throws Exception {
        String expiredToken = generateExpiredToken();

        mockMvc.perform(get("/api/payments")
                .header("Authorization", "Bearer " + expiredToken))
            .andExpect(status().isUnauthorized())
            .andExpect(jsonPath("$.error").value("Token expired"));
    }

    @Test
    void shouldRejectInvalidToken() throws Exception {
        String invalidToken = "invalid.jwt.token";

        mockMvc.perform(get("/api/payments")
                .header("Authorization", "Bearer " + invalidToken))
            .andExpect(status().isUnauthorized())
            .andExpect(jsonPath("$.error").value("Invalid token"));
    }

    @Test
    void shouldRejectTamperedToken() throws Exception {
        String validToken = generateValidToken();
        String tamperedToken = validToken.substring(0, validToken.length() - 5) + "AAAAA";

        mockMvc.perform(get("/api/payments")
                .header("Authorization", "Bearer " + tamperedToken))
            .andExpect(status().isUnauthorized());
    }

    @Test
    @WithMockUser(username = "[email protected]", roles = "CUSTOMER")
    void shouldAllowValidAuthentication() throws Exception {
        mockMvc.perform(get("/api/payments"))
            .andExpect(status().isOk());
    }

    @Test
    void shouldLockAccountAfterMultipleFailedAttempts() throws Exception {
        String username = "[email protected]";
        String wrongPassword = "wrong-password";

        // Attempt login 5 times with wrong password
        for (int i = 0; i < 5; i++) {
            mockMvc.perform(post("/api/auth/login")
                    .contentType(MediaType.APPLICATION_JSON)
                    .content(String.format(
                        "{\"username\":\"%s\",\"password\":\"%s\"}",
                        username, wrongPassword
                    )))
                .andExpect(status().isUnauthorized());
        }

        // Verify account is locked
        User user = userRepository.findByEmail(username).orElseThrow();
        assertThat(user.isLocked()).isTrue();

        // Even correct password should fail
        mockMvc.perform(post("/api/auth/login")
                .contentType(MediaType.APPLICATION_JSON)
                .content(String.format(
                    "{\"username\":\"%s\",\"password\":\"%s\"}",
                    username, "correct-password"
                )))
            .andExpect(status().isUnauthorized())
            .andExpect(jsonPath("$.error").value("Account locked"));
    }

    @Test
    void shouldEnforceMfaForHighValueTransactions() throws Exception {
        String token = authenticateUser("[email protected]");

        PaymentRequest request = new PaymentRequest();
        request.setAmount(new BigDecimal("15000")); // Above MFA threshold

        mockMvc.perform(post("/api/payments")
                .header("Authorization", "Bearer " + token)
                .contentType(MediaType.APPLICATION_JSON)
                .content(objectMapper.writeValueAsString(request)))
            .andExpect(status().isForbidden())
            .andExpect(jsonPath("$.error").value("MFA required"));
    }
}

These tests verify that authentication controls work as implemented - not just that the code compiles or that JWT libraries exist, but that expired tokens are rejected, tampered tokens are detected, accounts lock after failed attempts, and MFA is enforced when required. SAST tools can't verify these behaviors because they don't execute code. DAST tools might find some issues but can't systematically test all scenarios. Integration tests provide targeted verification of security requirements.

Authorization Tests

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureMockMvc
class AuthorizationSecurityTest {

    @Autowired
    private MockMvc mockMvc;

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private PaymentService paymentService;

    @Test
    @WithMockUser(roles = "CUSTOMER")
    void customerCanViewOwnPayment() throws Exception {
        UUID paymentId = createPaymentForCurrentUser();

        mockMvc.perform(get("/api/payments/" + paymentId))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.id").value(paymentId.toString()));
    }

    @Test
    @WithMockUser(roles = "CUSTOMER")
    void customerCannotViewOtherUserPayment() throws Exception {
        UUID otherUserPaymentId = createPaymentForUser("[email protected]");

        mockMvc.perform(get("/api/payments/" + otherUserPaymentId))
            .andExpect(status().isForbidden())
            .andExpect(jsonPath("$.error").value("Cannot access payment"));
    }

    @Test
    @WithMockUser(roles = "CUSTOMER")
    void customerCannotDeletePayment() throws Exception {
        UUID paymentId = createPaymentForCurrentUser();

        mockMvc.perform(delete("/api/payments/" + paymentId))
            .andExpect(status().isForbidden());
    }

    @Test
    @WithMockUser(roles = "OPERATOR")
    void operatorCanCreatePayment() throws Exception {
        PaymentRequest request = PaymentRequest.builder()
            .amount(new BigDecimal("1000"))
            .currency("USD")
            .build();

        mockMvc.perform(post("/api/payments")
                .contentType(MediaType.APPLICATION_JSON)
                .content(objectMapper.writeValueAsString(request)))
            .andExpect(status().isCreated());
    }

    @Test
    @WithMockUser(roles = "OPERATOR")
    void operatorCannotExceedAuthorizationLimit() throws Exception {
        PaymentRequest request = PaymentRequest.builder()
            .amount(new BigDecimal("100000")) // Exceeds operator limit
            .currency("USD")
            .build();

        mockMvc.perform(post("/api/payments")
                .contentType(MediaType.APPLICATION_JSON)
                .content(objectMapper.writeValueAsString(request)))
            .andExpect(status().isForbidden())
            .andExpect(jsonPath("$.error").value("Amount exceeds authorization limit"));
    }

    @Test
    @WithMockUser(roles = "ADMIN")
    void adminCanDeletePayment() throws Exception {
        UUID paymentId = createPaymentForUser("[email protected]");

        mockMvc.perform(delete("/api/payments/" + paymentId))
            .andExpect(status().isNoContent());
    }

    @Test
    @WithMockUser(roles = "ADMIN")
    void adminCannotDeleteCompletedPayment() throws Exception {
        UUID paymentId = createCompletedPayment();

        mockMvc.perform(delete("/api/payments/" + paymentId))
            .andExpect(status().isBadRequest())
            .andExpect(jsonPath("$.error").value("Cannot delete completed payment"));
    }

    @Test
    void unauthenticatedUserCannotAccessApi() throws Exception {
        mockMvc.perform(get("/api/payments"))
            .andExpect(status().isUnauthorized());

        mockMvc.perform(post("/api/payments")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{}"))
            .andExpect(status().isUnauthorized());
    }

    @Test
    @WithMockUser(roles = "CUSTOMER")
    void preventInsecureDirectObjectReference() throws Exception {
        // Create payments for two different users
        UUID user1PaymentId = createPaymentForUser("[email protected]");
        UUID user2PaymentId = createPaymentForUser("[email protected]");

        // Authenticate as user1
        String user1Token = authenticateUser("[email protected]");

        // Try to access user2's payment (IDOR attack)
        mockMvc.perform(get("/api/payments/" + user2PaymentId)
                .header("Authorization", "Bearer " + user1Token))
            .andExpect(status().isForbidden());
    }
}

Authorization tests verify role-based access control (RBAC) works correctly - customers can access only their own data, operators have transaction limits, admins have elevated privileges but business rules still apply. These tests catch Insecure Direct Object Reference (IDOR) vulnerabilities where users can manipulate IDs to access other users' data. DAST tools might find IDOR through fuzzing, but systematic testing of all roles and permission combinations requires integration tests.

Input Validation Tests

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureMockMvc
class InputValidationSecurityTest {

    @Autowired
    private MockMvc mockMvc;

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private PaymentService paymentService;

    @Test
    @WithMockUser(roles = "OPERATOR")
    void shouldRejectSqlInjectionAttempt() throws Exception {
        String sqlInjection = "'; DROP TABLE payments; --";

        mockMvc.perform(get("/api/payments")
                .param("status", sqlInjection))
            .andExpect(status().isBadRequest())
            .andExpect(jsonPath("$.validationErrors.status")
                .value("Invalid status format"));
    }

    @Test
    @WithMockUser(roles = "OPERATOR")
    void shouldRejectXssAttempt() throws Exception {
        String xssPayload = "<script>alert('XSS')</script>";

        PaymentRequest request = new PaymentRequest();
        request.setAmount(new BigDecimal("100"));
        request.setDescription(xssPayload);

        mockMvc.perform(post("/api/payments")
                .contentType(MediaType.APPLICATION_JSON)
                .content(objectMapper.writeValueAsString(request)))
            .andExpect(status().isCreated());

        // Verify XSS payload was sanitized
        Payment payment = paymentService.getLatestPayment();
        assertThat(payment.getDescription()).doesNotContain("<script>");
        assertThat(payment.getDescription()).doesNotContain("alert");
    }

    @Test
    @WithMockUser(roles = "OPERATOR")
    void shouldRejectInvalidAmounts() throws Exception {
        List<String> invalidAmounts = List.of(
            "-100",         // Negative
            "0",            // Zero
            "999999999999", // Too large
            "100.123",      // Too many decimals
            "abc"           // Not a number
        );

        for (String amount : invalidAmounts) {
            mockMvc.perform(post("/api/payments")
                    .contentType(MediaType.APPLICATION_JSON)
                    .content(String.format(
                        "{\"amount\":\"%s\",\"currency\":\"USD\"}",
                        amount
                    )))
                .andExpect(status().isBadRequest());
        }
    }

    @Test
    @WithMockUser(roles = "OPERATOR")
    void shouldRejectOversizedPayload() throws Exception {
        String largeDescription = "A".repeat(10000); // 10KB

        PaymentRequest request = new PaymentRequest();
        request.setAmount(new BigDecimal("100"));
        request.setDescription(largeDescription);

        mockMvc.perform(post("/api/payments")
                .contentType(MediaType.APPLICATION_JSON)
                .content(objectMapper.writeValueAsString(request)))
            .andExpect(status().isBadRequest())
            .andExpect(jsonPath("$.validationErrors.description")
                .value("Description must not exceed 500 characters"));
    }

    @Test
    @WithMockUser(roles = "OPERATOR")
    void shouldEnforceCsrfProtection() throws Exception {
        // POST without CSRF token should fail
        mockMvc.perform(post("/api/payments")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{}"))
            .andExpect(status().isForbidden());
    }
}

Input validation tests verify that malicious inputs are rejected or sanitized before processing: SQL injection payloads are blocked, XSS attempts are neutralized, invalid data formats are rejected, and payload size limits are enforced. SAST tools can detect missing input validation in code, but only exercising the running application verifies that validation actually works when requests are processed.
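As a rough illustration of the rules the amount tests above probe, a standalone validator might look like this. The maximum value and two-decimal limit are assumptions for the sketch, not the application's real constraints:

```java
import java.math.BigDecimal;

// Sketch of server-side amount validation. The MAX limit and two-decimal
// rule are illustrative assumptions mirroring the test cases in the text.
public class AmountValidator {

    private static final BigDecimal MAX = new BigDecimal("1000000");

    public static boolean isValid(String raw) {
        BigDecimal amount;
        try {
            amount = new BigDecimal(raw);      // rejects "abc"
        } catch (NumberFormatException e) {
            return false;
        }
        return amount.signum() > 0             // rejects "-100" and "0"
            && amount.compareTo(MAX) <= 0      // rejects absurdly large values
            && amount.scale() <= 2;            // rejects "100.123"
    }

    public static void main(String[] args) {
        String[] samples = {"100", "100.12", "-100", "0", "999999999999", "100.123", "abc"};
        for (String s : samples) {
            System.out.println(s + " -> " + (isValid(s) ? "accepted" : "rejected"));
        }
    }
}
```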


Security Monitoring

Integration tests verify that security controls work before code reaches production, but runtime monitoring detects attacks as they happen. Security monitoring tracks authentication failures, access denials, suspicious patterns, and anomalous behavior to alert security teams when attacks occur.

Real-Time Security Monitoring

Effective security monitoring requires threshold tuning to balance detection and noise. For example, one failed login is normal, but 10 attempts in 5 minutes suggests brute force. A user accessing 10 payments is normal, but 1000 in 5 minutes suggests exfiltration. Monitoring tracks patterns over time windows and triggers alerts when thresholds are exceeded. All events are logged for forensic analysis regardless of thresholds.
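The pattern-over-a-time-window idea can be sketched as a small sliding-window counter, a simplified in-memory stand-in for querying an audit log. The window and threshold values mirror the brute-force example in the text:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a sliding-window event counter for threshold alerting.
// In production this state would come from audit-log queries, not memory.
public class SlidingWindowCounter {

    private final Deque<Long> events = new ArrayDeque<>();
    private final long windowMillis;

    public SlidingWindowCounter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Record an event at the given timestamp and return the in-window count.
    public int record(long timestampMillis) {
        events.addLast(timestampMillis);
        // Drop events that have aged out of the window.
        while (!events.isEmpty() && events.peekFirst() < timestampMillis - windowMillis) {
            events.removeFirst();
        }
        return events.size();
    }

    public static void main(String[] args) {
        // 5-minute window, brute-force threshold of 10 failed logins.
        SlidingWindowCounter failures = new SlidingWindowCounter(5 * 60 * 1000L);
        long now = 0;
        for (int i = 0; i < 9; i++) {
            failures.record(now += 1000); // nine failures: below threshold
        }
        boolean alert = failures.record(now + 1000) >= 10; // tenth failure
        System.out.println("brute-force alert: " + alert);
    }
}
```

One counter per key (IP address, user ID) gives the per-source thresholds described above; a real implementation would also need eviction of idle keys.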

Security Event Monitoring

@Service
public class SecurityMonitoringService {

    @Autowired
    private SecurityAlertService alertService;

    @Autowired
    private AuditLogRepository auditLogRepository;

    @Autowired
    private FirewallService firewallService;

    @Autowired
    private UserService userService;

    /**
     * Monitor for suspicious authentication patterns
     */
    @Scheduled(fixedRate = 60000) // Every minute
    public void monitorAuthenticationFailures() {
        LocalDateTime since = LocalDateTime.now().minusMinutes(5);

        // Check for brute force attempts
        List<AuthFailure> failures = auditLogRepository
            .findAuthenticationFailures(since);

        Map<String, Long> failuresByIp = failures.stream()
            .collect(Collectors.groupingBy(
                AuthFailure::getIpAddress,
                Collectors.counting()
            ));

        failuresByIp.forEach((ip, count) -> {
            if (count >= 10) {
                alertService.sendAlert(SecurityAlert.builder()
                    .severity("HIGH")
                    .type("BRUTE_FORCE_ATTEMPT")
                    .ipAddress(ip)
                    .message(String.format(
                        "Multiple failed login attempts from IP: %s (%d attempts)",
                        ip, count
                    ))
                    .build());

                // Automatically block IP
                firewallService.blockIpAddress(ip, Duration.ofHours(24));
            }
        });
    }

    /**
     * Monitor for data exfiltration attempts
     */
    @Scheduled(fixedRate = 300000) // Every 5 minutes
    public void monitorDataAccess() {
        LocalDateTime since = LocalDateTime.now().minusMinutes(5);

        // Check for excessive data access
        List<DataAccessEvent> events = auditLogRepository
            .findDataAccessEvents(since);

        Map<UUID, Long> accessesByUser = events.stream()
            .collect(Collectors.groupingBy(
                DataAccessEvent::getUserId,
                Collectors.counting()
            ));

        accessesByUser.forEach((userId, count) -> {
            if (count >= 1000) {
                alertService.sendAlert(SecurityAlert.builder()
                    .severity("CRITICAL")
                    .type("POTENTIAL_DATA_EXFILTRATION")
                    .userId(userId)
                    .message(String.format(
                        "Excessive data access by user: %s (%d records)",
                        userId, count
                    ))
                    .build());

                // Temporarily suspend account
                userService.suspendAccount(userId, "Suspicious activity detected");
            }
        });
    }

    /**
     * Monitor for privilege escalation attempts
     */
    @EventListener
    public void onAccessDenied(AccessDeniedEvent event) {
        // Log the attempt
        auditLogRepository.save(AuditLog.builder()
            .userId(event.getUserId())
            .action(event.getAction())
            .resource(event.getResource())
            .result("ACCESS_DENIED")
            .timestamp(LocalDateTime.now())
            .build());

        // Check for repeated attempts
        long recentDenials = auditLogRepository.countRecentAccessDenials(
            event.getUserId(),
            LocalDateTime.now().minusMinutes(15)
        );

        if (recentDenials >= 5) {
            alertService.sendAlert(SecurityAlert.builder()
                .severity("HIGH")
                .type("PRIVILEGE_ESCALATION_ATTEMPT")
                .userId(event.getUserId())
                .message(String.format(
                    "Multiple access denied events for user: %s",
                    event.getUserId()
                ))
                .build());
        }
    }
}

Security monitoring complements security testing by detecting attacks in production that testing might miss. Testing validates that controls work correctly under normal conditions, but monitoring catches real attacks attempting to circumvent those controls. Together they provide comprehensive security coverage - testing ensures defenses work, monitoring ensures defenses detect real threats.


Penetration Testing

Penetration testing (pen testing) involves security experts manually testing your application to find vulnerabilities automated tools miss. Pen testers think like attackers, chaining vulnerabilities creatively and exploiting business logic flaws that automated scanners can't understand.

Penetration Testing Process

Scoping: Define what's in scope (production APIs, staging environments, mobile apps) and out of scope (customer data, production databases). Establish rules of engagement - what testing methods are permitted, when testing occurs, who to contact for issues. Clear scope prevents misunderstandings and accidental damage.

Reconnaissance: Pen testers gather information about your systems - domain names, IP addresses, technologies used, employee names, public code repositories. This mirrors what real attackers do. More information enables more targeted attacks.

Vulnerability Discovery: Pen testers use automated scanners plus manual testing to find vulnerabilities. They look beyond common web vulnerabilities - testing business logic (can users manipulate prices?), authentication flows (can password reset be bypassed?), and privilege escalation (can normal users access admin functions?).

Exploitation: Pen testers attempt to exploit discovered vulnerabilities to demonstrate real-world impact. They show that SQL injection doesn't just exist - it allows extracting customer data. They show that XSS doesn't just pop an alert box - it enables account takeover.

Reporting: Pen test reports document findings with severity ratings, technical details, proof-of-concept exploits, and remediation guidance. Executive summaries communicate risk to leadership. Technical sections provide developers with reproduction steps and fix recommendations.

When to Perform Penetration Testing

Pre-release pen testing: Test new applications before production launch to find critical vulnerabilities. Budget 2-4 weeks for testing plus 2-4 weeks for fixing discovered issues. Schedule pen testing after the application is feature-complete but well before the launch deadline.

Annual pen testing: Test production applications annually to find new vulnerabilities introduced by code changes, dependency updates, or new attack techniques. Compliance frameworks (PCI DSS, SOC 2, ISO 27001) often require annual pen testing.

Post-incident pen testing: After security incidents, pen test to verify fixes and identify what attackers might have accessed. Don't assume you found everything - professional pen testers often uncover additional vulnerabilities beyond the one that was exploited.

Regulatory pen testing: Some industries require pen testing at specific intervals or after significant changes. PCI DSS requires annual external pen testing and pen testing after significant infrastructure or application changes. Understand your regulatory requirements.

Internal vs External Penetration Testing

External pen testing (black box): Pen testers have no inside knowledge - they attack from the internet like external attackers. This tests perimeter defenses, public-facing applications, and whether information leakage aids attackers. Realistic but may miss internal vulnerabilities.

Internal pen testing (gray box): Pen testers have some credentials or documentation - they attack from a compromised employee perspective. This tests what happens after initial breach - lateral movement, privilege escalation, data access. Reveals whether insider threats can be detected and contained.

White box pen testing: Pen testers have full access to source code, architecture documentation, and credentials. They find vulnerabilities systematically using code review and architectural analysis. Most thorough, but least realistic from an attacker's perspective.

Most organizations need both external and internal pen testing. External testing validates perimeter security; internal testing validates defense-in-depth.

Remediation and Verification

Pen test reports are only valuable if you fix the issues. Prioritize by severity and exploitability:

  • Critical: Exploitable remotely, leads to data breach or system compromise. Fix immediately (within days).
  • High: Significant security impact, requires specific conditions. Fix within weeks.
  • Medium: Moderate impact, difficult to exploit or limited scope. Fix within months.
  • Low: Minimal impact or theoretical issues. Fix when convenient.

After fixing issues, request pen testers verify fixes. Verification confirms that patches are effective and didn't introduce new vulnerabilities. Include verification in original pen test engagement to avoid rework costs.


Threat Modeling

Threat modeling is a proactive security practice where teams systematically identify, categorize, and prioritize potential threats before building features. It answers: "What can go wrong with this system, and how do we prevent it?"

STRIDE Threat Framework

STRIDE categorizes threats into six types. Use STRIDE to systematically consider what could go wrong with each component of your system.

Spoofing: Attacker pretends to be someone else - impersonating users, services, or systems. Mitigate with strong authentication (MFA, certificates), service-to-service authentication (mutual TLS, API keys), and session management.

Tampering: Attacker modifies data in transit or at rest - altering transaction amounts, changing user permissions, manipulating audit logs. Mitigate with input validation, authorization checks before data changes, integrity checks (checksums, digital signatures), and database constraints.

Repudiation: User denies performing an action and you can't prove they did. Mitigate with comprehensive audit logging capturing who did what when, digital signatures on critical operations, and immutable audit trails.

Information Disclosure: Sensitive data is exposed to unauthorized parties - customer PII, credentials, business secrets. Mitigate with encryption (TLS in transit, at rest for sensitive data), access controls, data classification, and avoiding information leakage in error messages.

Denial of Service: Attacker disrupts availability - overwhelming APIs with traffic, exhausting resources, exploiting inefficient code. Mitigate with rate limiting, resource quotas, efficient algorithms, auto-scaling, and DDoS protection (Cloudflare, AWS Shield).

Elevation of Privilege: Attacker gains higher privileges than intended - normal user accesses admin functions, SQL injection leads to database admin access. Mitigate with least privilege principles, authorization checks on all sensitive operations, input validation preventing injection attacks, and secure coding practices.

Threat Modeling Process

1. Decompose the System: Create a diagram showing components (frontend, backend, database, external services), data flows, trust boundaries, and entry points. Data Flow Diagrams (DFD) work well - they show how data moves through your system and where security controls apply.

2. Identify Threats: For each component and data flow, apply STRIDE. What spoofing attacks could affect the web application? What tampering attacks could affect data in transit to the payment gateway? What information disclosure risks exist in the database?

3. Rate Threats: Prioritize threats by likelihood and impact. Use a simple risk matrix:

  • High Likelihood + High Impact: Critical, address immediately
  • High Likelihood + Low Impact or Low Likelihood + High Impact: Important, address soon
  • Low Likelihood + Low Impact: Monitor, address if risk changes

4. Mitigate Threats: For each high-priority threat, identify countermeasures. Document whether you'll implement the mitigation, accept the risk, or transfer the risk (insurance, third-party service).

5. Validate Mitigations: After implementation, verify mitigations work through security testing, code review, and pen testing.
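The likelihood x impact matrix from step 3 can be expressed as a tiny prioritization function. The enum and band names are illustrative:

```java
// Sketch of the simple risk matrix described above.
// Levels and priority bands are illustrative assumptions.
public class RiskMatrix {

    public enum Level { LOW, HIGH }
    public enum Priority { CRITICAL, IMPORTANT, MONITOR }

    public static Priority prioritize(Level likelihood, Level impact) {
        if (likelihood == Level.HIGH && impact == Level.HIGH) {
            return Priority.CRITICAL; // address immediately
        }
        if (likelihood == Level.LOW && impact == Level.LOW) {
            return Priority.MONITOR;  // revisit if risk changes
        }
        return Priority.IMPORTANT;    // mixed high/low combinations
    }

    public static void main(String[] args) {
        System.out.println(prioritize(Level.HIGH, Level.HIGH));
        System.out.println(prioritize(Level.HIGH, Level.LOW));
        System.out.println(prioritize(Level.LOW, Level.LOW));
    }
}
```

Real threat-modeling tools use finer-grained scales (e.g. 1-5 scores multiplied together), but the structure is the same: a total order over likelihood/impact pairs.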

Attack Trees

Attack trees visualize how attackers might achieve a goal by breaking it down into steps. They help understand attack paths and where security controls should focus.

Attack trees show that preventing any leaf node prevents that entire attack path. If you prevent SQL injection, attackers must try different approaches. If you prevent all leaves under "Exploit Application Vulnerability," attackers must shift to compromising the database or social engineering.
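The AND/OR structure of an attack tree can be sketched as a small evaluator: an OR goal is reachable if any child is, an AND goal only if all children are. The example tree and mitigations are purely illustrative:

```java
import java.util.List;

// Sketch of attack-tree evaluation. Mitigating a leaf removes it as a
// path; a goal stays reachable while any unmitigated path remains.
public class AttackTree {

    public enum Kind { LEAF, AND, OR }

    public final Kind kind;
    public final boolean mitigated; // only meaningful for leaves
    public final List<AttackTree> children;

    private AttackTree(Kind kind, boolean mitigated, List<AttackTree> children) {
        this.kind = kind;
        this.mitigated = mitigated;
        this.children = children;
    }

    public static AttackTree leaf(boolean mitigated) { return new AttackTree(Kind.LEAF, mitigated, List.of()); }
    public static AttackTree and(AttackTree... c)    { return new AttackTree(Kind.AND, false, List.of(c)); }
    public static AttackTree or(AttackTree... c)     { return new AttackTree(Kind.OR, false, List.of(c)); }

    public boolean reachable() {
        switch (kind) {
            case LEAF: return !mitigated;
            case AND:  return children.stream().allMatch(AttackTree::reachable);
            default:   return children.stream().anyMatch(AttackTree::reachable); // OR
        }
    }

    public static void main(String[] args) {
        // Goal: steal data = (SQLi OR XSS on the app) OR phish an admin.
        AttackTree sqlInjection = leaf(true);  // mitigated: parameterized queries
        AttackTree xss = leaf(true);           // mitigated: output encoding
        AttackTree phishing = leaf(false);     // not yet mitigated
        AttackTree goal = or(or(sqlInjection, xss), phishing);
        System.out.println("goal still reachable: " + goal.reachable());
    }
}
```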

When to Threat Model

New features: Threat model during technical design phase before implementation starts. Identifying threats early allows designing secure architecture. Retrofitting security is more expensive than building it in.

Architecture changes: Threat model when adding new components (message queue, cache layer, external service integration), changing trust boundaries (opening API to partners), or adopting new technologies (migrating to microservices).

Regulatory requirements: Some standards require threat modeling. PCI DSS requires identifying threats to cardholder data. GDPR requires data protection by design, which threat modeling demonstrates.

Periodic reviews: Review threat models annually or when threat landscape changes (new attack techniques, vulnerabilities in similar systems). Threat models become outdated as systems evolve - keep them current.


Incident Response

Security testing and monitoring detect attacks, but incident response handles what happens when attacks succeed. Having documented incident response procedures ensures your team responds effectively under pressure rather than improvising during crises.

Incident Response Process

The incident response process must be followed systematically - skipping steps can worsen the incident:

  • Detection: Focus on accurate identification and avoid jumping to conclusions.
  • Assessment: Determine severity and scope - over-reacting to minor issues wastes resources, while under-reacting to critical breaches leads to regulatory penalties.
  • Containment: Stop the attack's spread without destroying evidence (don't simply delete files or restart servers - preserve them for forensics).
  • Eradication: Remove the threat and patch vulnerabilities to prevent re-entry.
  • Recovery: Validate that systems are clean before returning to production.
  • Post-Incident Review: Document lessons learned and improve defenses - many breaches exploit the same vulnerability twice because organizations fail to learn from the first incident.

Having a documented incident response plan and practicing it regularly through tabletop exercises ensures your team can respond effectively under pressure.

Incident Severity Levels

| Severity | Description | Response Time | Examples | Actions |
| --- | --- | --- | --- | --- |
| Critical | Data breach, system compromise | Immediate | Customer data exposed, ransomware | Notify CISO, legal, PR |
| High | Potential breach, authentication bypass | <1 hour | Privilege escalation vulnerability | Emergency patch, notify security team |
| Medium | Limited impact vulnerability | <24 hours | XSS on non-critical page | Schedule patch, monitor |
| Low | Minor security issue | <1 week | Outdated dependency, no exploit | Plan update, add to backlog |
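The response times in this table can be kept machine-checkable with a small severity-to-deadline mapping. This is a sketch mirroring the table, not production code:

```java
import java.time.Duration;

// Sketch: map incident severity to the response-time deadlines above.
// Names are illustrative; deadlines mirror the severity table.
public class IncidentSeverityPolicy {

    public enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    public static Duration responseDeadline(Severity severity) {
        switch (severity) {
            case CRITICAL: return Duration.ZERO;      // immediate
            case HIGH:     return Duration.ofHours(1);
            case MEDIUM:   return Duration.ofHours(24);
            default:       return Duration.ofDays(7); // LOW
        }
    }

    public static void main(String[] args) {
        for (Severity s : Severity.values()) {
            System.out.println(s + " -> respond within " + responseDeadline(s));
        }
    }
}
```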

Incident Response Implementation

@Service
public class IncidentResponseService {

    private static final Logger log =
        LoggerFactory.getLogger(IncidentResponseService.class);

    @Autowired
    private NotificationService notificationService;

    @Autowired
    private AuditLogService auditLogService;

    // Collaborators used during containment and evidence collection
    @Autowired private SessionService sessionService;
    @Autowired private EncryptionService encryptionService;
    @Autowired private UserService userService;
    @Autowired private AuthService authService;
    @Autowired private FirewallService firewallService;
    @Autowired private WafService wafService;
    @Autowired private RateLimitService rateLimitService;
    @Autowired private ForensicsService forensicsService;

    public void handleSecurityIncident(SecurityIncident incident) {
        // 1. Detection and Logging
        log.error("Security incident detected: {}", incident);
        auditLogService.logSecurityIncident(incident);

        // 2. Assessment
        IncidentSeverity severity = assessSeverity(incident);
        incident.setSeverity(severity);

        // 3. Notification
        notifySecurityTeam(incident);
        if (severity == IncidentSeverity.CRITICAL) {
            notifyCiso(incident);
            notifyLegalTeam(incident);
        }

        // 4. Containment
        containThreat(incident);

        // 5. Evidence Collection
        collectEvidence(incident);

        // 6. Eradication
        eradicateThreat(incident);

        // 7. Recovery
        recoverSystems(incident);

        // 8. Post-Incident Review
        schedulePostIncidentReview(incident);
    }

    private void containThreat(SecurityIncident incident) {
        switch (incident.getType()) {
            case DATA_BREACH:
                // Revoke all active sessions
                sessionService.revokeAllSessions();
                // Rotate encryption keys
                encryptionService.rotateKeys();
                break;

            case COMPROMISED_CREDENTIALS:
                // Disable affected accounts
                userService.disableUser(incident.getUserId());
                // Force password reset
                authService.forcePasswordReset(incident.getUserId());
                break;

            case SQL_INJECTION_ATTEMPT:
                // Block attacking IP
                firewallService.blockIpAddress(incident.getSourceIp());
                // Enable WAF rules
                wafService.enableStrictMode();
                break;

            case BRUTE_FORCE_ATTACK:
                // Block attacking IPs
                incident.getAttackingIps().forEach(
                    ip -> firewallService.blockIpAddress(ip)
                );
                // Enable rate limiting
                rateLimitService.enableStrictLimits();
                break;
        }
    }

    private void collectEvidence(SecurityIncident incident) {
        // Preserve logs
        List<AuditLog> relevantLogs = auditLogService.getLogsForIncident(
            incident.getStartTime(),
            incident.getEndTime()
        );

        // Save to secure storage for forensics
        forensicsService.preserveEvidence(incident.getId(), relevantLogs);

        // Capture system state
        forensicsService.captureSystemSnapshot(incident.getId());
    }
}

Breach Notification Requirements

@Service
public class BreachNotificationService {

    @Autowired
    private TaskScheduler taskScheduler;

    /**
     * GDPR requires breach notification within 72 hours
     */
    public void notifyDataBreach(DataBreachIncident breach) {
        // 1. Internal notification
        notifySecurityTeam(breach);
        notifyCiso(breach);
        notifyLegalTeam(breach);
        notifyPrivacyOfficer(breach);

        // 2. Assess notification requirements
        boolean affectsEuResidents = breach.affectsEuResidents();
        boolean highRisk = breach.getSeverity() == Severity.CRITICAL;

        // 3. Regulatory notification (within 72 hours for GDPR)
        if (affectsEuResidents) {
            scheduleRegulatoryNotification(breach, Duration.ofHours(72));
        }

        // 4. Customer notification (required for high risk breaches)
        if (highRisk) {
            scheduleCustomerNotification(breach);
        }

        // 5. Public disclosure (if required by law)
        if (breach.requiresPublicDisclosure()) {
            schedulePublicDisclosure(breach);
        }
    }

    private void scheduleRegulatoryNotification(DataBreachIncident breach,
                                                Duration deadline) {
        LocalDateTime notificationDeadline = breach.getDetectedAt().plus(deadline);

        // Schedule task
        taskScheduler.schedule(
            () -> notifyRegulator(breach),
            notificationDeadline.toInstant(ZoneOffset.UTC)
        );

        // Set reminder 24 hours before deadline
        taskScheduler.schedule(
            () -> sendReminderToLegalTeam(breach),
            notificationDeadline.minus(Duration.ofHours(24))
                .toInstant(ZoneOffset.UTC)
        );
    }
}

Breach notification requirements vary by jurisdiction (GDPR requires 72 hours, other regulations differ). Understanding your legal obligations and having automated processes for tracking notification deadlines ensures compliance during incident response.
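The deadline arithmetic behind such tracking is simple enough to sketch with `java.time`. The 72-hour and 24-hour figures follow the GDPR example above; the class and method names are illustrative:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of breach-notification deadline arithmetic: the 72-hour GDPR
// clock runs from detection, with a reminder 24 hours before the deadline.
public class NotificationDeadlines {

    public static Instant regulatorDeadline(Instant detectedAt) {
        return detectedAt.plus(Duration.ofHours(72));
    }

    public static Instant reminderAt(Instant detectedAt) {
        return regulatorDeadline(detectedAt).minus(Duration.ofHours(24));
    }

    public static void main(String[] args) {
        Instant detected = Instant.parse("2024-01-01T00:00:00Z");
        System.out.println("notify regulator by:  " + regulatorDeadline(detected));
        System.out.println("remind legal team at: " + reminderAt(detected));
    }
}
```

Working in `Instant` (UTC) rather than local date-times avoids time-zone ambiguity when the detection and notification teams sit in different regions.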


Security Testing in CI/CD Pipelines

Integrate security testing throughout CI/CD to catch vulnerabilities continuously rather than in pre-release security reviews. This "shift left" approach finds issues earlier when they're cheaper to fix.

# .gitlab-ci.yml - Comprehensive security pipeline
stages:
  - build
  - test
  - security
  - deploy

build:
  stage: build
  image: gradle:8-jdk25
  script:
    - ./gradlew build

# Fast security checks - run on every MR
sast-quick:
  stage: security
  image: returntocorp/semgrep
  script:
    - semgrep --config=auto --error .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

secret-scan:
  stage: security
  image: python:3.11
  script:
    - pip install truffleHog
    - trufflehog --regex --entropy=True file://$(pwd)
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

eslint-security:
  stage: security
  image: node:22
  script:
    - npm ci
    - npm run lint:security
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

# Comprehensive security checks - run on main branch
dependency-scan:
  stage: security
  image: gradle:8-jdk25
  script:
    - ./gradlew dependencyCheckAnalyze
  artifacts:
    paths:
      - build/reports/dependency-check-report.html
    expire_in: 30 days
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

sonarqube:
  stage: security
  image: gradle:8-jdk25
  script:
    - ./gradlew sonarqube -Dsonar.qualitygate.wait=true
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

# Expensive security checks - run nightly
zap-dast:
  stage: security
  image: owasp/zap2docker-stable
  # Note: the ZAP image does not ship a JDK, so launching the app with
  # Gradle here requires a custom image; in practice, run the app as a
  # CI service or scan a deployed test environment instead.
  script:
    - ./gradlew bootRun &
    - sleep 60
    - zap-baseline.py -t http://localhost:8080 -r zap-report.html
  artifacts:
    paths:
      - zap-report.html
    expire_in: 90 days
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  allow_failure: true

container-scan:
  stage: security
  image: aquasec/trivy
  script:
    - trivy image --severity HIGH,CRITICAL payment-service:$CI_COMMIT_SHA
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

Tiered security testing:

  • Every commit: Fast SAST (Semgrep), secret scanning (seconds to minutes)
  • Main branch: Comprehensive SAST (SonarQube), dependency scanning (minutes)
  • Nightly: DAST (ZAP), container scanning, license compliance (hours)
  • Pre-release: Penetration testing, threat model review (days to weeks)

This balances comprehensive coverage with developer productivity. Fast checks provide immediate feedback. Comprehensive checks catch issues before production without blocking every commit.

For stronger supply-chain security, pin CI images by immutable digest in production pipelines (not only floating tags like node:22). This prevents unexpected base image drift between runs and improves reproducibility during incident investigations.
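As a hypothetical sketch of that recommendation (the digest value is a placeholder you would replace with the sha256 digest of an image you have verified):

```yaml
# Pin a CI image by immutable digest instead of a floating tag.
# <digest> is a placeholder, not a real digest.
eslint-security:
  image: node:22@sha256:<digest>
```

A digest-pinned reference always resolves to the same image bytes, so pipeline runs stay reproducible even if the `node:22` tag is later repointed.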



Summary

Key Takeaways:

  1. Layered Security Testing: Use SAST, DAST, dependency scanning, secret scanning, integration tests, penetration testing, and threat modeling together for comprehensive coverage
  2. Shift Left Security: Integrate security testing early in development with fast automated checks on every commit
  3. SAST (Static Analysis): Use SonarQube, Semgrep, ESLint plugins to find code vulnerabilities before execution
  4. DAST (Dynamic Testing): Use OWASP ZAP to test running applications from attacker perspective, finding runtime misconfigurations
  5. Dependency Scanning: Use Dependabot, Snyk, or OWASP Dependency-Check to identify and fix vulnerable third-party libraries
  6. Secret Scanning: Use git-secrets, TruffleHog, GitHub secret scanning to prevent credential exposure in version control
  7. Integration Tests: Write tests verifying authentication, authorization, and input validation work correctly in your application
  8. Security Monitoring: Implement real-time monitoring for brute force, data exfiltration, and privilege escalation attempts
  9. Incident Response: Have documented procedures for detecting, containing, eradicating, recovering from, and learning from security incidents
  10. Penetration Testing: Professional pen testing finds complex vulnerabilities and business logic flaws automated tools miss
  11. Threat Modeling: Use STRIDE framework to identify and mitigate threats during design phase
  12. CI/CD Integration: Automate security testing in pipelines with tiered approach balancing speed and thoroughness
  13. Continuous Improvement: Learn from security findings, fix issues promptly, verify fixes work, and update defenses based on lessons learned
  14. Defense in Depth: No single security measure is sufficient - layer multiple controls for comprehensive protection

Security Testing Checklist
  • [Good] SAST: SonarQube/Semgrep running on every commit
  • [Good] DAST: OWASP ZAP running weekly or on main branch
  • [Good] Dependency Scanning: Dependabot/Snyk monitoring dependencies continuously
  • [Good] Secret Scanning: git-secrets preventing commits, TruffleHog scanning history
  • [Good] Integration Tests: Authentication, authorization, input validation tests in test suite
  • [Good] Security Monitoring: Real-time alerts for brute force, data exfiltration, privilege escalation
  • [Good] Incident Response Plan: Documented procedures, severity levels, notification requirements
  • [Good] Penetration Testing: Annual pen testing scheduled with remediation plan
  • [Good] Threat Modeling: Threat models for new features and architecture changes
  • [Good] CI/CD Integration: Automated security checks in pipeline at appropriate frequencies