Chapter I
Authentication & Authorization
4 Controls
01
Broken Object Level Authorization (BOLA / IDOR)
⚠ Issue
APIs expose endpoints that accept object IDs (user IDs, order IDs, document IDs) without verifying that the requesting user owns or has permission to access that specific object. Attackers simply increment or swap IDs to access other users' data.
🔥 Risk
Horizontal privilege escalation — any authenticated user can read, modify, or delete any other user's data by changing the object ID in the request URL or body. The most common critical API vulnerability.
📁 Files & Paths
src/middleware/ownership.js
src/routes/api.js
src/models/*.js
🔧 Fix — Ownership middleware (Express / Node.js)
src/middleware/ownership.js — Reusable ownership check
/**
 * Ownership Middleware — verify the requesting user owns the resource.
 * Use on EVERY endpoint that accepts a resource ID parameter.
 */
const Order = require('../models/Order');
const Document = require('../models/Document');

// Generic ownership factory — pass the model + owner field
const requireOwnership = (Model, ownerField = 'userId') => {
  return async (req, res, next) => {
    const resourceId = req.params.id || req.params.orderId || req.params.documentId;
    if (!resourceId) return res.status(400).json({ error: 'Resource ID required' });

    const resource = await Model.findById(resourceId).select(ownerField).lean();

    // ✗ NEVER: trust the client-supplied userId in the request body
    // ✓ ALWAYS: compare against authenticated user from the JWT/session
    if (!resource || resource[ownerField].toString() !== req.user.id) {
      // Return 404 (not 403) — don't reveal the resource exists
      return res.status(404).json({ error: 'Resource not found' });
    }

    req.resource = resource; // attach for downstream use
    next();
  };
};

module.exports = { requireOwnership };

// ─── Usage in routes ──────────────────────────────────────────
// router.get('/orders/:id', authenticate, requireOwnership(Order), getOrder);
// router.delete('/documents/:id', authenticate, requireOwnership(Document), deleteDocument);
// router.put('/invoices/:id', authenticate, requireOwnership(Invoice), updateInvoice);
src/routes/api.js — Correct vs Wrong patterns
// ✗ VULNERABLE: No ownership check — any user can access any order
app.get('/api/orders/:id', authenticate, async (req, res) => {
  const order = await Order.findById(req.params.id);
  res.json(order);
});

// ✓ SECURE: Ownership middleware validates user == resource owner
app.get('/api/orders/:id', authenticate, requireOwnership(Order),
  async (req, res) => {
    const order = await Order.findById(req.params.id);
    res.json(order);
  }
);

// ✓ DB-Level: Filter by userId in EVERY query (defense-in-depth)
const order = await Order.findOne({
  _id: req.params.id,
  userId: req.user.id // ALWAYS add this filter
});

// ✓ Prefer UUIDs over sequential IDs (prevents enumeration)
// Use UUIDs:      /api/orders/550e8400-e29b-41d4-a716-446655440000
// Not sequential: /api/orders/1001 (trivially predictable)
👥 Responsible
Backend Developer · Security Review
🔑 OWASP Reference
API1:2023 — Broken Object Level Authorization. The most prevalent and impactful API vulnerability. Affects every API that accepts resource IDs without ownership validation.
✅ Verification
curl -H "Authorization: Bearer USER_A_TOKEN" https://api.example.com/api/orders/USER_B_ORDER_ID
HTTP 404 Not Found ← Correct (not 200 or 403)
curl -H "Authorization: Bearer USER_A_TOKEN" https://api.example.com/api/orders/USER_A_ORDER_ID
HTTP 200 OK ← Own resource accessible
# Automated test: swap IDs across 2 test accounts → all cross-user 404
for id in $(get-user-b-ids); do curl -sI -H "Authorization: Bearer $USER_A" https://api.example.com/api/orders/$id | head -1; done
→ HTTP/2 404 (all responses)
📋 VAPT Closure Statement
🔒
Audit Closure — API1:2023 BOLA
All API endpoints accepting resource identifiers (order IDs, document IDs, user IDs) now enforce server-side ownership validation via a centralized requireOwnership middleware. Database queries include a mandatory userId filter as defense-in-depth. Object IDs have been migrated from sequential integers to UUIDs to prevent enumeration. Cross-user resource access returns HTTP 404 to avoid confirming resource existence. Verified through automated cross-account test suite — all unauthorized cross-user requests return 404.
02
Broken Authentication — Endpoints & Credential Security
⚠ Issue
Authentication mechanisms are poorly implemented — weak password hashing (MD5/SHA1), no brute-force protection on login endpoints, weak credential recovery flows, missing authentication on sensitive endpoints, and API keys transmitted in URLs (logged in server logs).
🔥 Risk
Account takeover at scale via credential stuffing, brute-force against unprotected auth endpoints, API key leakage via URL logging, and admin escalation via weak token validation.
🔧 Fix — Secure password hashing + authentication hardening
src/services/auth.service.js — Hardened authentication
const bcrypt = require('bcrypt');
const crypto = require('crypto');

// ─── Password hashing ─────────────────────────────────────────
const SALT_ROUNDS = 14; // ≥12 in prod (12≈300ms, 14≈1.2s per hash)

async function hashPassword(plain) {
  // ✗ NEVER: MD5, SHA1, SHA256 — not for passwords
  // ✓ ALWAYS: bcrypt (cost 12+) or Argon2id
  return bcrypt.hash(plain, SALT_ROUNDS);
}

async function verifyPassword(plain, hash) {
  // Use timing-safe comparison (bcrypt.compare is already timing-safe)
  return bcrypt.compare(plain, hash);
}

// ─── API Key generation ───────────────────────────────────────
function generateApiKey() {
  // Prefix for easy rotation detection + 32 random bytes
  const prefix = 'sk_live_';
  const secret = crypto.randomBytes(32).toString('hex');
  return prefix + secret;
}

// Store only hash of API key — NEVER store plaintext
async function storeApiKey(rawKey) {
  const keyHash = crypto.createHash('sha256').update(rawKey).digest('hex');
  return keyHash; // save this to DB
}

// ─── Timing-safe string comparison (prevents timing attacks) ──
function safeCompare(a, b) {
  if (a.length !== b.length) return false;
  return crypto.timingSafeEqual(Buffer.from(a), Buffer.from(b));
}
src/middleware/apiKeyAuth.js — API key authentication
const crypto = require('crypto');
const ApiKey = require('../models/ApiKey');

// ─── API Key transport: ALWAYS header, NEVER URL query param ──
const apiKeyAuth = async (req, res, next) => {
  // ✓ Accept from Authorization header (Bearer) or X-API-Key header
  const rawKey = req.headers['x-api-key'] ||
    req.headers['authorization']?.replace('Bearer ', '');
  // ✗ NEVER: const key = req.query.api_key — gets logged in access logs!

  if (!rawKey) {
    return res.status(401).json({ error: 'API key required' });
  }

  const keyHash = crypto.createHash('sha256').update(rawKey).digest('hex');
  const apiKey = await ApiKey.findOne({ keyHash, active: true });
  if (!apiKey) return res.status(401).json({ error: 'Invalid or revoked API key' });

  // Update last-used timestamp for audit trail
  apiKey.lastUsed = new Date();
  await apiKey.save();

  req.apiKey = apiKey;
  next();
};

module.exports = apiKeyAuth;
👥 Responsible
Backend Developer
🔒 Authentication Checklist
- Use bcrypt (cost ≥12) or Argon2id for passwords
- API keys transmitted in headers only (never URLs)
- Store only SHA-256 hash of API keys in DB
- Rate-limit auth endpoints (5 attempts / 15 min)
- Return the same error for wrong user vs wrong password (prevents user enumeration)
- Implement account lockout after repeated failures
- Use timing-safe comparison for all credential checks
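The lockout and identical-error items above can be sketched together. This is a minimal in-memory illustration using only Node's standard library — the thresholds mirror the checklist's 5-attempt / 15-minute values, and a production deployment would back the counter with Redis so it survives restarts and works across instances:

```javascript
const MAX_ATTEMPTS = 5;
const LOCKOUT_MS = 15 * 60 * 1000; // 15 minutes
const attempts = new Map(); // email -> { count, lockedUntil }

function recordFailure(email) {
  const entry = attempts.get(email) || { count: 0, lockedUntil: 0 };
  entry.count += 1;
  if (entry.count >= MAX_ATTEMPTS) entry.lockedUntil = Date.now() + LOCKOUT_MS;
  attempts.set(email, entry);
}

function isLockedOut(email) {
  const entry = attempts.get(email);
  return !!entry && entry.lockedUntil > Date.now();
}

// One generic message for "no such user" AND "wrong password" —
// distinct errors let attackers enumerate valid accounts
const GENERIC_AUTH_ERROR = { status: 401, error: 'Invalid credentials' };

// Usage: after 5 failures the account locks for 15 minutes
for (let i = 0; i < 5; i++) recordFailure('victim@example.com');
console.log(isLockedOut('victim@example.com')); // true
console.log(isLockedOut('other@example.com'));  // false
```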
✅ Verification
curl -X POST https://api.example.com/auth/login -H "Content-Type: application/json" -d '{"email":"a","password":"b"}'
→ HTTP 401 {"error":"Invalid credentials"} ← same msg for wrong user/pass
for i in $(seq 1 10); do curl -s -X POST https://api.example.com/auth/login -H "Content-Type: application/json" -d '{"email":"test@x.com","password":"wrong"}'; done
→ After 5 attempts: HTTP 429 {"error":"Too many requests"}
# Verify API key never appears in server access logs
grep "sk_live_" /var/log/nginx/access.log
→ (empty — keys are in headers, not URLs)
📋 VAPT Closure Statement
🔒
Audit Closure — API2:2023 Broken Authentication
Password hashing migrated to bcrypt with cost factor 14 (~1.2 seconds per hash). API keys are 256-bit cryptographically random values stored as SHA-256 hashes — plaintext keys are never persisted. All API key transport uses HTTP headers (never URL query parameters). Authentication endpoint rate limiting enforces a 5-attempt / 15-minute window returning HTTP 429. Timing-safe comparison prevents timing oracle attacks. Generic error messages prevent user enumeration.
03
JWT Security — Algorithm, Expiry & Claims
⚠ Issue
JWT vulnerabilities: alg:none bypass (unauthenticated tokens), HS256 with weak secrets (brute-forceable in minutes), missing exp claim (tokens never expire), no algorithm whitelist in verification, and tokens stored in localStorage (XSS-vulnerable).
🔥 Risk
Complete authentication bypass via alg:none, indefinite sessions via missing expiry, token forgery by cracking weak HS256 secrets with hashcat, and session theft via XSS when tokens are kept in localStorage.
🔧 Fix — Secure JWT implementation
src/utils/jwt.js — Production-hardened JWT service
const jwt = require('jsonwebtoken');
const crypto = require('crypto');

// ─── Generate RSA keys (run once, store securely) ─────────────
// $ openssl genrsa -out private.pem 4096
// $ openssl rsa -in private.pem -pubout -out public.pem
const PRIVATE_KEY = process.env.JWT_PRIVATE_KEY; // PEM format RSA private key
const PUBLIC_KEY = process.env.JWT_PUBLIC_KEY;   // PEM format RSA public key

const ACCESS_TOKEN_TTL = '15m'; // short-lived access token
const REFRESH_TOKEN_TTL = '7d'; // long-lived refresh token

const JWT_OPTIONS = {
  algorithm: 'RS256', // NEVER HS256 with shared secrets
  issuer: 'api.your-app.com',
  audience: 'your-app.com',
};

function signAccessToken(payload) {
  // Include only necessary claims — NEVER include passwords or PII
  const claims = {
    sub: payload.userId,
    role: payload.role,
    jti: crypto.randomUUID(), // unique token ID for revocation
  };
  return jwt.sign(claims, PRIVATE_KEY, { ...JWT_OPTIONS, expiresIn: ACCESS_TOKEN_TTL });
}

function verifyToken(token) {
  return jwt.verify(token, PUBLIC_KEY, {
    algorithms: ['RS256'], // WHITELIST — prevents alg:none and alg confusion
    issuer: 'api.your-app.com',
    audience: 'your-app.com',
    // clockTolerance: 30, // seconds of clock skew allowed
  });
}

// ─── Refresh token: stored in httpOnly cookie ─────────────────
function signRefreshToken(userId) {
  const token = crypto.randomBytes(64).toString('hex');
  // Store hash in DB with userId and expiry — NOT the raw token
  const hash = crypto.createHash('sha256').update(token).digest('hex');
  return { token, hash };
}

module.exports = { signAccessToken, verifyToken, signRefreshToken };
src/routes/auth.js — Token delivery via httpOnly cookie
// Set access token in httpOnly cookie — NOT in response body
res.cookie('access_token', accessToken, {
  httpOnly: true,         // JS cannot access — blocks XSS theft
  secure: true,           // HTTPS only
  sameSite: 'strict',     // CSRF protection
  maxAge: 15 * 60 * 1000, // 15 minutes
  path: '/',
});

// If returning token in body (SPAs with interceptors), use short TTL
// NEVER store in localStorage on the client side
👥 Responsible
Backend Developer
✅ Verification
# Test alg:none bypass — must be rejected
node -e "const jwt=require('jsonwebtoken'); try { jwt.verify('eyJhbGciOiJub25lIn0.eyJzdWIiOiIxMjMifQ.', process.env.JWT_PUBLIC_KEY, {algorithms:['RS256']}); } catch(e) { console.log('BLOCKED:', e.message); }"
→ BLOCKED: invalid algorithm
# Verify token expiry (decode without verify)
node -e "const token=process.env.TOKEN, [,p]=token.split('.'); console.log(JSON.parse(Buffer.from(p,'base64url').toString()))"
→ { sub: '...', exp: 1234567890, iat: 1234566990, jti: 'uuid...' }
→ exp - iat = 900 seconds (15 minutes) ✓
📋 VAPT Closure Statement
🔒
Audit Closure — JWT Security
JWT implementation uses RS256 (4096-bit RSA asymmetric signing) with an explicit algorithm whitelist in jwt.verify() preventing alg:none and algorithm confusion attacks. Access tokens expire in 15 minutes with issuer and audience claim validation. The JWT unique identifier (jti) enables token revocation. Tokens are delivered via HttpOnly/Secure/SameSite=Strict cookies, inaccessible to JavaScript. Refresh tokens are opaque random values with only their SHA-256 hash stored in the database.
04
OAuth 2.0 Scope Enforcement & API Key Lifecycle
⚠ Issue
OAuth scopes not enforced server-side (client claims any scope), API keys never expire or rotate, client secrets stored in source code, authorization codes reusable, and PKCE not implemented for public clients (mobile/SPA).
🔥 Risk
Scope escalation — client requests admin scopes they were not granted, authorization code theft and replay, and indefinite access via leaked API keys that are never rotated or revoked.
🔧 Fix — Server-side scope validation + API key management
src/middleware/scopes.js — OAuth scope enforcement
/**
 * Scope-based authorization middleware.
 * Validates that the token's granted scopes include the required scope.
 * Scopes must ALWAYS be validated server-side — never trust client claims.
 */
const requireScope = (...requiredScopes) => (req, res, next) => {
  const tokenScopes = req.auth?.scope?.split(' ') || [];
  const hasAllScopes = requiredScopes.every(s => tokenScopes.includes(s));

  if (!hasAllScopes) {
    return res.status(403).json({
      error: 'insufficient_scope',
      error_description: `Required scopes: ${requiredScopes.join(', ')}`,
      required: requiredScopes,
      granted: tokenScopes,
    });
  }
  next();
};

// ─── Define granular scopes ───────────────────────────────────
const SCOPES = {
  READ_ORDERS: 'orders:read',
  WRITE_ORDERS: 'orders:write',
  DELETE_ORDERS: 'orders:delete',
  READ_USERS: 'users:read',
  ADMIN: 'admin',
};

module.exports = { requireScope, SCOPES };

// ─── Usage: apply scopes to routes ────────────────────────────
// router.get('/orders', auth, requireScope('orders:read'), getOrders);
// router.delete('/orders/:id', auth, requireScope('orders:delete'), deleteOrder);
// router.get('/admin/users', auth, requireScope('admin', 'users:read'), getUsers);
API Key lifecycle management
// ─── API Key model with security attributes ───────────────────
const ApiKeySchema = {
  keyHash: { type: String, required: true, unique: true }, // SHA-256 hash
  prefix: { type: String, required: true },  // first 8 chars for display
  userId: { type: ObjectId, required: true },
  scopes: [String],                          // ['orders:read', 'orders:write']
  active: { type: Boolean, default: true },
  expiresAt: { type: Date, required: true }, // ALWAYS set expiry
  lastUsed: { type: Date },
  name: { type: String },                    // human-readable label
  ipAllowlist: [String],                     // optional IP restriction
  createdAt: { type: Date, default: Date.now },
};

// Max API key validity: 90 days (force rotation)
const MAX_KEY_DAYS = 90;
const expiresAt = new Date(Date.now() + MAX_KEY_DAYS * 86400000);
👥 Responsible
Backend Developer · Security Review
📋 VAPT Closure Statement
🔒
Audit Closure — OAuth Scopes & API Keys
OAuth scopes are validated server-side on every protected endpoint via the requireScope middleware — scope claims in tokens cannot be self-escalated by clients. API keys enforce a 90-day maximum validity with mandatory rotation. IP allowlisting is available for service-to-service keys. Only the SHA-256 hash of each API key is persisted — plaintext keys are shown exactly once at creation and never stored. Key activity is logged with timestamps for audit trail.
Chapter II
Input Security & Injection Prevention
3 Controls
05
Injection Prevention — SQL, NoSQL, Command & LDAP
⚠ Issue
API endpoints pass user-controlled data directly into database queries (SQL/NoSQL), OS commands, LDAP queries, or template engines without sanitization. A single unparameterized query can expose the entire database.
🔥 Risk
Complete database exfiltration via SQL injection (' OR 1=1--), authentication bypass, data destruction, remote code execution via OS command injection, and NoSQL operator injection bypassing query filters.
🔧 Fix — Parameterized queries + sanitization layers
SQL / PostgreSQL — Always use parameterized queries
// ✗ VULNERABLE: string concatenation — DO NOT DO THIS
const query = `SELECT * FROM users WHERE email = '${req.body.email}'`;
// Payload: email = "' OR 1=1; DROP TABLE users;--"

// ✓ SECURE: Parameterized query (pg)
const { rows } = await pool.query(
  'SELECT id, name, role FROM users WHERE email = $1 AND active = $2',
  [req.body.email, true] // parameters never interpolated into SQL string
);

// ✓ SECURE: Prisma ORM (always parameterized internally)
const user = await prisma.user.findFirst({
  where: { email: req.body.email, active: true },
  select: { id: true, name: true, role: true } // select only needed fields
});

// ✓ SECURE: Knex query builder
const users = await knex('users')
  .where({ email: req.body.email, active: true })
  .select('id', 'name', 'role');
NoSQL / MongoDB — Operator injection prevention
const mongoSanitize = require('express-mongo-sanitize');
const hpp = require('hpp');
const logger = require('./logger'); // your application logger

// ─── Global middleware — strip MongoDB operators from ALL inputs ─
app.use(mongoSanitize({
  replaceWith: '_',
  onSanitize: ({ req, key }) => {
    logger.warn(`NOSQL_INJECTION_ATTEMPT ip=${req.ip} key=${key}`);
  }
}));
app.use(hpp()); // prevent HTTP parameter pollution

// ─── NEVER use $where in Mongoose ─────────────────────────────
// ✗ User.find({ '$where': 'this.credits > 0' }); ← allows JS execution
// ✓ User.find({ credits: { $gt: 0 } });          ← operator from code, not user input

// ─── OS Command Injection ─────────────────────────────────────
// ✗ NEVER: exec(`convert ${userFilename} output.pdf`) ← shell injection
// ✓ Use execFile with array args (no shell interpolation)
const { execFile } = require('child_process');
execFile('convert', [sanitizedFilename, 'output.pdf'], callback);
// ✓ Or use libraries (sharp, pdfkit) instead of shell commands
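The control's title also covers LDAP injection, which the snippets above don't address. RFC 4515 defines the characters that must be escaped in search filters, so a small helper suffices; this one is our own sketch:

```javascript
// Escape user input before interpolating into an LDAP search filter.
// Special characters per RFC 4515: \  *  (  )  and NUL.
function escapeLdapFilter(value) {
  return String(value).replace(/[\\*()\u0000]/g, (ch) => {
    const map = { '\\': '\\5c', '*': '\\2a', '(': '\\28', ')': '\\29', '\u0000': '\\00' };
    return map[ch];
  });
}

// ✗ Without escaping: "*)(uid=*" turns a lookup into a match-everything filter
// ✓ With escaping, the wildcard and parens become literal characters
const filter = `(uid=${escapeLdapFilter('*)(uid=*')})`;
console.log(filter); // (uid=\2a\29\28uid=\2a)
```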
👥 Responsible
Backend Developer
✅ Verification
# Test SQL injection
curl -s -X POST https://api.example.com/users -H "Content-Type: application/json" -d '{"email":"admin'\''--","password":"x"}'
→ HTTP 422 {"error":"Validation failed","details":[{"msg":"Valid email required"}]}
# Test NoSQL operator injection
curl -s -X POST https://api.example.com/auth/login -H "Content-Type: application/json" -d '{"email":{"$gt":""},"password":{"$gt":""}}'
→ HTTP 401 Unauthorized ← operators stripped by mongoSanitize
📋 VAPT Closure Statement
🔒
Audit Closure — Injection Prevention
All database interactions use parameterized queries (pg library / Prisma ORM) — no string concatenation in SQL queries. NoSQL injection is mitigated via express-mongo-sanitize stripping MongoDB operators from all request inputs, logged with IP for forensics. OS command execution uses execFile with argument arrays (no shell). HTTP parameter pollution is blocked by the hpp middleware. Injection attempts are verified to return validation errors (HTTP 422) or authentication failures (HTTP 401), never 200 OK.
06
Mass Assignment & Object Property Injection
⚠ Issue
APIs that directly bind the request body to model objects (User.update(req.body)) allow attackers to inject hidden fields — setting role: "admin", balance: 999999, or isVerified: true by including extra fields in their request.
🔥 Risk
Privilege escalation by self-assigning admin role, financial fraud via balance manipulation, and account verification bypass — all by adding extra JSON fields to a legitimate update request.
🔧 Fix — Explicit field allowlisting
src/routes/users.js — Safe profile update
// ✗ VULNERABLE: binds entire req.body to user model
await User.findByIdAndUpdate(req.user.id, req.body);
// Attacker sends: { "name": "Alice", "role": "admin", "balance": 99999 }

// ✓ SECURE: explicit allowlist — only update permitted fields
const UPDATABLE_FIELDS = ['name', 'bio', 'avatar', 'timezone'];

const pickAllowed = (body, allowed) => {
  return allowed.reduce((acc, key) => {
    if (key in body) acc[key] = body[key];
    return acc;
  }, {});
};

const safeUpdate = pickAllowed(req.body, UPDATABLE_FIELDS);
await User.findByIdAndUpdate(req.user.id, safeUpdate, { new: true, runValidators: true });

// ─── Zod schema approach (recommended) ────────────────────────
import { z } from 'zod';

const updateProfileSchema = z.object({
  name: z.string().min(1).max(100).optional(),
  bio: z.string().max(500).optional(),
  timezone: z.string().optional(),
  // Unlisted fields (role, isAdmin, balance, isVerified) are stripped
  // by default; with .strict() they are rejected with a validation error
}).strict(); // .strict() = reject any extra keys

const validated = updateProfileSchema.parse(req.body);
👥 Responsible
Backend Developer
✅ Verification
curl -s -X PUT https://api.example.com/users/me -H "Authorization: Bearer $TOKEN" -d '{"name":"Alice","role":"admin","isAdmin":true}'
→ HTTP 200 {"name":"Alice"} ← only name updated, role/isAdmin stripped ✓
curl -s https://api.example.com/users/me -H "Authorization: Bearer $TOKEN"
→ {"name":"Alice","role":"user"} ← role unchanged ✓
📋 VAPT Closure Statement
🔒
Audit Closure — Mass Assignment (API3:2023)
All update endpoints use explicit field allowlists (or Zod .strict() schemas) to accept only declared, permitted fields. Sensitive fields (role, isAdmin, balance, isVerified) are never exposed in update routes. Mongoose runValidators: true ensures schema-level validation runs on updates. Verified: submitting role: admin in update request has no effect on the persisted user role.
07
API Request Schema Validation
⚠ Issue
APIs accept malformed, oversized, or malicious input without validation — enabling DoS via huge payloads, injection via unchecked strings, and logic errors via unexpected data types (string where integer expected).
🔥 Risk
Memory exhaustion via oversized JSON payloads, type coercion attacks (sending strings where numbers expected), business logic bypass via missing required fields, and injection via unvalidated string formats.
🔧 Fix — Comprehensive Zod schema validation
src/schemas/order.schema.js
import { z } from 'zod';

export const createOrderSchema = z.object({
  items: z.array(z.object({
    productId: z.string().uuid('Product ID must be a UUID'),
    quantity: z.number().int().min(1).max(100),
  })).min(1).max(50),
  deliveryAddress: z.object({
    street: z.string().min(1).max(200),
    city: z.string().min(1).max(100),
    country: z.string().length(2), // ISO 3166-1 alpha-2
    zip: z.string().regex(/^[\d\-A-Z]{3,10}$/),
  }),
  couponCode: z.string()
    .regex(/^[A-Z0-9\-]{4,20}$/, 'Invalid coupon format')
    .optional(),
  notes: z.string().max(500).optional(),
}).strict(); // reject any extra fields

// ─── Validation middleware factory ────────────────────────────
const validate = (schema) => (req, res, next) => {
  const result = schema.safeParse(req.body);
  if (!result.success) {
    return res.status(422).json({
      error: 'Validation failed',
      details: result.error.flatten().fieldErrors,
    });
  }
  req.body = result.data; // replace with validated + stripped data
  next();
};

// ─── Body size limit (DoS prevention) ─────────────────────────
app.use(express.json({ limit: '10kb' })); // reject > 10KB
app.use(express.urlencoded({ limit: '10kb', extended: false }));
👥 Responsible
Backend Developer
📋 VAPT Closure Statement
🔒
Audit Closure — Schema Validation
All API endpoints enforce strict Zod schema validation with type checking, length limits, format validation (UUID, email, regex patterns), and field count limits. Request body size is capped at 10KB globally. Zod .strict() rejects undeclared fields. Validation errors return structured HTTP 422 responses with field-level details. The validated data object replaces the raw request body in the middleware pipeline — downstream handlers only receive clean, validated data.
Chapter III
Rate Limiting & Access Control
3 Controls
08
Rate Limiting, Throttling & Quota Management
⚠ Issue
APIs without rate limiting are open to brute-force attacks, resource exhaustion, enumeration attacks (scraping all records), and abuse of expensive compute operations (ML endpoints, PDF generation, email sending).
🔥 Risk
API abuse at scale — complete data exfiltration via pagination enumeration, server resource exhaustion (CPU/memory DoS), financial abuse of metered operations, and credential brute-force at API speed.
🔧 Fix — Multi-layer rate limiting
src/middleware/rateLimits.js — Tiered rate limiting
const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis').default;
const Redis = require('ioredis');
const slowDown = require('express-slow-down');

const redisClient = new Redis(process.env.REDIS_URL);

// ─── Helper factory ───────────────────────────────────────────
const makeLimit = (windowMin, max, message) => rateLimit({
  windowMs: windowMin * 60 * 1000,
  max,
  standardHeaders: true,
  legacyHeaders: false,
  store: new RedisStore({ sendCommand: (...a) => redisClient.call(...a) }),
  message: { error: message, retryAfter: 'See Retry-After header' },
  skip: (req) => req.ip === '127.0.0.1', // skip localhost
});

// ─── Tiered limits ────────────────────────────────────────────
module.exports = {
  // Auth: very strict — 5 attempts per 15 min
  authLimit: makeLimit(15, 5, 'Too many auth attempts'),

  // API general: 100 req per 15 min per IP
  apiLimit: makeLimit(15, 100, 'API rate limit exceeded'),

  // Expensive ops (AI, PDF, email): 10 req per hour
  heavyLimit: makeLimit(60, 10, 'Operation rate limit exceeded'),

  // Password reset: 3 per hour
  resetLimit: makeLimit(60, 3, 'Too many reset requests'),

  // Write operations: 60 per 10 min (prevent scraping via writes)
  writeLimit: makeLimit(10, 60, 'Write rate limit exceeded'),

  // Progressive slowdown (speed throttle before hard limit)
  speedLimiter: slowDown({
    windowMs: 15 * 60 * 1000,
    delayAfter: 50,
    delayMs: (used) => (used - 50) * 100, // +100ms per hit above 50
  }),
};

// ─── Apply in routes ──────────────────────────────────────────
// app.use('/api/', apiLimit, speedLimiter);
// app.post('/auth/login', authLimit, loginController);
// app.post('/api/generate-pdf', authenticate, heavyLimit, generatePdf);
Nginx upstream rate limiting (server-level)
# nginx.conf — Nginx rate limiting as first layer
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=auth:1m rate=3r/m;

server {
    location /api/ {
        limit_req zone=api burst=60 nodelay;
        limit_req_status 429;
        proxy_pass http://node_backend;
    }

    location /api/auth/ {
        limit_req zone=auth burst=5 nodelay;
        limit_req_status 429;
        proxy_pass http://node_backend;
    }
}
👥 Responsible
Backend Developer · DevOps/Nginx Admin
✅ Verification
for i in $(seq 1 110); do curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com/api/orders; done | sort | uniq -c
→ 100 200 (first 100 requests)
→ 10 429 (requests 101-110 rate limited) ✓
curl -I https://api.example.com/api/orders 2>&1 | grep -i ratelimit
→ ratelimit-limit: 100
→ ratelimit-remaining: 97
→ ratelimit-reset: 1714000000
📋 VAPT Closure Statement
🔒
Audit Closure — Rate Limiting (API4:2023)
Rate limiting is implemented at two layers: Nginx (30 req/s with burst; 3 req/min on auth) and Express application middleware (Redis-backed, distributed across instances). Tiered limits apply: auth endpoints (5/15 min), general API (100/15 min), expensive operations (10/hour). HTTP 429 responses follow RFC 6585, and standard RateLimit response headers are emitted per the IETF RateLimit header fields draft. Progressive slowdown precedes hard limits. Verified: requests beyond threshold receive HTTP 429 with a Retry-After header.
09
Broken Function Level Authorization (BFLA)
⚠ Issue
Admin or privileged API endpoints are accessible to regular users because the server doesn't check the caller's role — only whether they're authenticated. Hidden admin routes discovered via JS source analysis or API fuzzing.
🔥 Risk
Regular users accessing admin functions — deleting other users' accounts, viewing all users' PII, triggering system-level operations, modifying system configuration, or accessing analytics data.
🔧 Fix — Role-Based Access Control middleware
src/middleware/rbac.js — Flexible RBAC middleware
/**
 * requireRole — enforce role-based access on routes.
 * Supports multiple allowed roles and hierarchical check.
 */
const logger = require('../utils/logger'); // your application logger

const requireRole = (...allowedRoles) => (req, res, next) => {
  if (!req.user) {
    return res.status(401).json({ error: 'Authentication required' });
  }
  const userRole = req.user.role;
  if (!allowedRoles.includes(userRole)) {
    // Log unauthorized function access attempt
    logger.warn({
      event: 'UNAUTHORIZED_FUNCTION_ACCESS',
      userId: req.user.id,
      role: userRole,
      required: allowedRoles,
      path: req.path,
      ip: req.ip,
    });
    return res.status(403).json({ error: 'Insufficient privileges' });
  }
  next();
};

// ─── Role hierarchy ───────────────────────────────────────────
const ROLES = {
  USER: 'user',
  MODERATOR: 'moderator',
  ADMIN: 'admin',
  SUPER_ADMIN: 'super_admin',
};

// ─── Route examples ───────────────────────────────────────────
// User routes (any authenticated user)
// router.get('/me', authenticate, getMyProfile);

// Moderator+ routes
// router.get('/reports', authenticate, requireRole('moderator', 'admin'), getReports);

// Admin-only routes
// router.get('/admin/users', authenticate, requireRole('admin'), getAllUsers);
// router.delete('/users/:id', authenticate, requireRole('admin'), deleteUser);

// Super admin only
// router.post('/system/config', authenticate, requireRole('super_admin'), updateConfig);

module.exports = { requireRole, ROLES };
👥 Responsible
Backend Developer
✅ Verification
# Access admin endpoint as regular user
curl -H "Authorization: Bearer $USER_TOKEN" https://api.example.com/admin/users
→ HTTP 403 {"error":"Insufficient privileges"}
# Access admin endpoint as admin user
curl -H "Authorization: Bearer $ADMIN_TOKEN" https://api.example.com/admin/users
→ HTTP 200 [{"id":"...","name":"..."}]
# Verify BFLA attempt is logged
grep "UNAUTHORIZED_FUNCTION_ACCESS" /var/log/app/combined.log | tail -3
→ {"event":"UNAUTHORIZED_FUNCTION_ACCESS","userId":"abc","role":"user","required":["admin"],...}
📋 VAPT Closure Statement
🔒
Audit Closure — BFLA (API5:2023)
All admin and privileged API endpoints enforce role-based access control via the requireRole middleware applied at the route level. Regular user tokens receive HTTP 403 on admin endpoints regardless of authentication status. BFLA attempt events are logged with user ID, role, required roles, endpoint, and IP for forensic monitoring. An automated test suite verifies that each role tier cannot access endpoints above its privilege level.
10
Excessive Data Exposure — Response Filtering
⚠ Issue
APIs return the full database object to the client and rely on the frontend to hide sensitive fields. Password hashes, internal IDs, secret tokens, PII, and metadata are exposed in API responses even when not needed by the UI.
🔥 Risk
Password hash leakage (offline cracking), internal system IDs (enumeration), PII exposure (GDPR violation), and security token disclosure — all visible in browser DevTools Network tab to any logged-in user.
🔧 Fix — Response DTOs and field selection
src/dto/user.dto.js — Data Transfer Objects
// ─── User DTO: what the API returns vs what's in the DB ───────
class UserPublicDto {
  constructor(user) {
    // ✓ Safe to return — explicitly listed
    this.id = user._id.toString();
    this.name = user.name;
    this.avatarUrl = user.avatarUrl;
    this.joinedAt = user.createdAt;
    // Fields NOT included (never exposed):
    // password, passwordHash, salt, securityToken,
    // resetToken, apiKeys, internalNotes, ipHistory,
    // stripeCustomerId, lastLoginIp, failedLoginCount
  }
}

class UserPrivateDto {
  constructor(user) {
    // Returned only to the user for their own profile
    this.id = user._id.toString();
    this.name = user.name;
    this.email = user.email;
    this.avatarUrl = user.avatarUrl;
    this.role = user.role;
    this.joinedAt = user.createdAt;
    // Still NOT included: password, tokens, etc.
  }
}

// ─── Usage in controllers ─────────────────────────────────────
// res.json(new UserPublicDto(user));              ← public profile
// res.json(new UserPrivateDto(user));             ← own profile
// res.json(users.map(u => new UserPublicDto(u))); ← list

// ─── MongoDB: always use .select() to limit DB fields ─────────
const user = await User
  .findById(id)
  .select('-password -resetToken -apiKeys -securityToken')
  .lean();

// ─── Prisma: always use select ────────────────────────────────
const user = await prisma.user.findUnique({
  where: { id },
  select: { id: true, name: true, email: true, avatarUrl: true },
  // password, tokens etc: NOT in select = not fetched
});
👥 Responsible
Backend Developer
✅ Verification
curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com/users/me | python3 -m json.tool
→ {"id":"...","name":"Alice","email":"alice@x.com","avatarUrl":"...","joinedAt":"..."}
# Must NOT contain: password, passwordHash, resetToken, apiKeys, internalNotes
curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com/users/me | grep -iE 'password|hash|token|secret|key'
→ (empty — no sensitive fields in response)
📋 VAPT Closure Statement
🔒
Audit Closure — Excessive Data Exposure (API6:2023)
All API responses use explicit Data Transfer Object (DTO) classes that whitelist only the fields appropriate for the consumer context. Database queries use .select() field exclusions to prevent over-fetching. Sensitive fields (password hashes, tokens, internal metadata, PII beyond display needs) are confirmed absent from API responses via automated response schema validation. MongoDB -password -resetToken projection prevents accidental leakage at the query level.
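The automated check referenced in the closure above can be sketched as a unit test that builds the DTO from a deliberately over-populated database record and asserts that no sensitive key survives serialization. A minimal sketch — the DTO mirrors the one above; the record and key list are illustrative, not exhaustive:

```javascript
// Hypothetical leak check: build the DTO from a "worst case" DB record
// and assert that none of the sensitive fields survive serialization.
const SENSITIVE_KEYS = [
  'password', 'passwordHash', 'salt', 'resetToken',
  'apiKeys', 'securityToken', 'internalNotes',
];

class UserPublicDto {
  constructor(user) {
    this.id = String(user._id);
    this.name = user.name;
    this.avatarUrl = user.avatarUrl;
    this.joinedAt = user.createdAt;
  }
}

function assertNoSensitiveFields(dto) {
  const json = JSON.stringify(dto);
  const leaked = SENSITIVE_KEYS.filter(k => json.includes(`"${k}"`));
  if (leaked.length) throw new Error(`DTO leaks: ${leaked.join(', ')}`);
  return true;
}

// A DB record deliberately stuffed with everything we must never return
const dbUser = {
  _id: 'u1', name: 'Alice', avatarUrl: '/a.png', createdAt: '2024-01-01',
  password: 'x', passwordHash: 'x', salt: 'x', resetToken: 'x',
  apiKeys: ['x'], securityToken: 'x', internalNotes: 'x',
};

console.log(assertNoSensitiveFields(new UserPublicDto(dbUser))); // → true
```

Running this in CI against every DTO class catches the common regression where a new DB field silently flows into an API response.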
Chapter IV
Transport Security & HTTP Headers
3 Controls
11
HTTPS Enforcement & TLS Hardening
⚠ Issue
The API communicates over plain HTTP or accepts weak TLS versions (1.0/1.1) and broken cipher suites (CBC, RC4, 3DES); HSTS headers are missing, allowing protocol downgrade; and clients accept self-signed certificates without verification.
🔥 Risk
Token interception via MITM on HTTP or weak TLS connections, POODLE/BEAST/CRIME attacks on legacy cipher suites, and SSL stripping attacks without HSTS.
🔧 Fix — Nginx TLS hardening for API
/etc/nginx/sites-enabled/api.conf
# ─── Redirect all HTTP to HTTPS ──────────────────────────────
server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    # ─── TLS version: only 1.2 and 1.3 ──────────────────────────
    ssl_protocols TLSv1.2 TLSv1.3;

    # ─── Strong ciphers: ECDHE + AEAD only ──────────────────────
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
    ssl_prefer_server_ciphers off;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # ─── OCSP stapling ──────────────────────────────────────────
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 8.8.8.8;

    # ─── HSTS: 2 years + preload ────────────────────────────────
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    # ─── Proxy to Node.js (bound to localhost only) ─────────────
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Node.js — Trust proxy + enforce HTTPS
// Trust Nginx reverse proxy
app.set('trust proxy', 1);

// Redirect HTTP → HTTPS at app level (backup)
app.use((req, res, next) => {
  if (req.headers['x-forwarded-proto'] !== 'https') {
    return res.redirect(301, `https://${req.headers.host}${req.url}`);
  }
  next();
});

// HSTS header from Node.js (belt + suspenders)
app.use(helmet({
  hsts: { maxAge: 63072000, includeSubDomains: true, preload: true },
}));
👥 Responsible
DevOps / Nginx Admin, Backend Developer
✅ Verification
# SSL Labs grade (expect A+)
curl -s "https://api.ssllabs.com/api/v3/analyze?host=api.example.com" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d['endpoints'][0]['grade'])"
→ A+
# HSTS header check
curl -I https://api.example.com/ | grep -i strict
→ strict-transport-security: max-age=63072000; includeSubDomains; preload
# TLS 1.0/1.1 must be rejected
openssl s_client -connect api.example.com:443 -tls1 2>&1 | grep "handshake failure"
→ handshake failure (TLS 1.0 rejected) ✓
📋 VAPT Closure Statement
🔒
Audit Closure — HTTPS / TLS (API10:2023)
API is served exclusively over HTTPS — HTTP requests receive HTTP 301 redirect. Nginx TLS configuration enforces TLSv1.2/1.3 only with ECDHE+AEAD cipher suites; legacy protocols (TLS 1.0, 1.1, SSLv3) and weak ciphers (CBC, RC4, 3DES) are disabled. HSTS header is set with 2-year max-age and preload directive. OCSP stapling is active. SSL Labs reports Grade A+. Node.js is bound to localhost and proxied through Nginx — no direct port exposure.
12
API-Specific Security Response Headers
⚠ Issue
API responses missing security headers enable MIME sniffing attacks, expose framework information (X-Powered-By: Express), allow sensitive responses to be cached by intermediaries, and expose API internals via response timing headers.
🔥 Risk
MIME confusion attacks on uploaded content, authentication tokens cached in proxy servers (leaked to subsequent users on shared machines), framework fingerprinting enabling targeted CVE exploitation.
🔧 Fix — Helmet.js + custom API headers
src/app.js — Helmet + API security headers
const helmet = require('helmet');

// ─── Remove Express fingerprint ───────────────────────────────
app.disable('x-powered-by');

// ─── Helmet: full suite of security headers ───────────────────
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'none'"],      // API: block all browser rendering
      frameAncestors: ["'none'"],  // prevent framing
    },
  },
  noSniff: true,                   // X-Content-Type-Options: nosniff
  xssFilter: true,                 // X-XSS-Protection
  frameguard: { action: 'deny' },
  referrerPolicy: { policy: 'no-referrer' },  // API: no referrer leakage
  permittedCrossDomainPolicies: false,
  hsts: { maxAge: 63072000, includeSubDomains: true, preload: true },
}));

// ─── API-specific headers ─────────────────────────────────────
app.use((req, res, next) => {
  // Ensure responses are treated as JSON (not HTML)
  res.setHeader('Content-Type', 'application/json; charset=utf-8');
  // No caching for API responses (especially auth responses)
  res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate');
  res.setHeader('Pragma', 'no-cache');
  res.setHeader('Expires', '0');
  // Correlation ID for request tracing
  const reqId = req.headers['x-request-id'] || require('crypto').randomUUID();
  res.setHeader('X-Request-ID', reqId);
  req.requestId = reqId;
  next();
});

// ─── Allow caching only for specific static/public content ────
app.get('/api/v1/products', (req, res, next) => {
  res.setHeader('Cache-Control', 'public, max-age=60, s-maxage=300'); // 5 min CDN
  next();
}, getProducts);
👥 Responsible
Backend Developer
✅ Verification
curl -I https://api.example.com/api/orders | grep -iE '(x-powered|x-content|cache|x-request|content-type)'
→ x-content-type-options: nosniff
→ cache-control: no-store, no-cache, must-revalidate
→ x-request-id: 550e8400-e29b-41d4-a716-446655440000
→ content-type: application/json; charset=utf-8
→ (x-powered-by: NOT present) ✓
📋 VAPT Closure Statement
🔒
Audit Closure — Security Headers
Helmet.js middleware enforces X-Content-Type-Options (nosniff), X-Frame-Options (DENY), Referrer-Policy (no-referrer), and CSP (default-src: none for API). Framework disclosure (X-Powered-By) is suppressed. All API responses set Cache-Control: no-store to prevent credential caching at proxies. X-Request-ID correlation header enables full request tracing. Content-Type is explicitly set to application/json. Response header audit confirms all required headers present and no sensitive headers exposed.
13
CORS — Cross-Origin Resource Sharing Hardening
⚠ Issue
Wildcard CORS (Access-Control-Allow-Origin: *) on credentialed APIs, reflecting arbitrary Origin headers without validation, and allowing all HTTP methods when only GET/POST are needed create cross-site data exfiltration risks.
🔥 Risk
Cross-origin attacks from malicious websites making authenticated requests to your API using the victim's cookies, reading private data, and triggering authenticated actions on their behalf.
🔧 Fix — Strict origin whitelist CORS
src/middleware/cors.js
const cors = require('cors');

const ALLOWED_ORIGINS = new Set([
  'https://your-app.com',
  'https://www.your-app.com',
  'https://admin.your-app.com',
  ...(process.env.NODE_ENV === 'development' ? ['http://localhost:3000'] : []),
]);

const corsOptions = {
  origin: (origin, callback) => {
    // Allow server-to-server (no browser origin header)
    if (!origin) return callback(null, true);
    // ✗ NEVER: Access-Control-Allow-Origin: * with credentials
    if (ALLOWED_ORIGINS.has(origin)) {
      return callback(null, true);
    }
    callback(new Error(`CORS blocked: ${origin}`));
  },
  credentials: true,
  methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
  allowedHeaders: ['Content-Type', 'Authorization', 'X-Request-ID', 'X-CSRF-Token'],
  exposedHeaders: ['X-Request-ID', 'RateLimit-Limit', 'RateLimit-Remaining'],
  maxAge: 86400,  // cache preflight 24h
  optionsSuccessStatus: 200,
};

// Handle preflight before routes
app.options('*', cors(corsOptions));
app.use(cors(corsOptions));
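The origin callback above is easiest to unit-test when the whitelist check is extracted into a pure function. A minimal sketch, reusing the illustrative origins from the config above:

```javascript
// Pure origin check, extracted so it can be unit-tested without Express.
// The origins mirror the illustrative whitelist above.
const ALLOWED_ORIGINS = new Set([
  'https://your-app.com',
  'https://www.your-app.com',
  'https://admin.your-app.com',
]);

function isOriginAllowed(origin) {
  if (!origin) return true;            // server-to-server: no Origin header
  return ALLOWED_ORIGINS.has(origin);  // exact match only — no prefix tricks
}

// Exact matching is the point: "https://your-app.com.evil.com" is rejected,
// whereas origin.startsWith('https://your-app.com') would accept it.
console.log(isOriginAllowed('https://your-app.com'));          // → true
console.log(isOriginAllowed('https://evil.com'));              // → false
console.log(isOriginAllowed('https://your-app.com.evil.com')); // → false
```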
👥 Responsible
Backend Developer
✅ Verification
# Allowed origin
curl -H "Origin: https://your-app.com" -I https://api.example.com/api/orders
→ access-control-allow-origin: https://your-app.com
# Disallowed origin — must be blocked
curl -H "Origin: https://evil.com" -I https://api.example.com/api/orders
→ (no access-control-allow-origin header) ✓
# Wildcard must NEVER appear with credentials
curl -I https://api.example.com/api/ | grep "access-control-allow-origin"
→ must NOT be * (wildcard)
📋 VAPT Closure Statement
🔒
Audit Closure — CORS Hardening
CORS enforces an explicit domain whitelist — wildcard origin is not used. Unapproved origins receive no CORS headers (requests blocked by browser). Preflight responses are cached for 24 hours. Credentials mode requires exact origin match. Allowed headers are explicitly listed (no Access-Control-Allow-Headers: *). Development localhost origin is excluded from production configuration via NODE_ENV check. Verified: requests from evil.com origin receive no CORS headers.
Chapter V
GraphQL, Versioning & Observability
5 Controls
14
GraphQL Security Hardening
⚠ Issue
GraphQL APIs expose: introspection (full schema disclosure to attackers), deeply nested queries causing exponential DB calls (DoS), batching attacks multiplying N requests into 1, field suggestion leaking schema to unauthorized users, and unlimited query complexity.
🔥 Risk
Schema enumeration via introspection guides targeted attacks, deeply nested { user { friends { friends { friends { ... }}}}} causes exponential N+1 query DoS, and query batching bypasses rate limiting (1 HTTP request = 1000 operations).
🔧 Fix — GraphQL security layer
src/graphql/server.js — Hardened Apollo Server
const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const depthLimit = require('graphql-depth-limit');
const { createComplexityLimit } = require('graphql-validation-complexity');

const server = new ApolloServer({
  typeDefs,
  resolvers,

  // ─── Disable introspection in production ────────────────────
  introspection: process.env.NODE_ENV !== 'production',

  validationRules: [
    // Max query depth: 7 levels (prevents deeply nested DoS)
    depthLimit(7),
    // Max query complexity score (prevents expensive field combos)
    createComplexityLimit(1000, {
      scalarCost: 1,
      objectCost: 10,
      listFactor: 10,
    }),
  ],

  // ─── Disable field suggestions (prevents schema leakage) ────
  plugins: [
    {
      requestDidStart: () => ({
        didEncounterErrors: ({ errors }) => {
          // Strip "Did you mean X?" suggestions from error messages
          if (errors) {
            errors.forEach(err => {
              if (err.message.includes('Did you mean')) {
                err.message = err.message.replace(/Did you mean.*?\?/g, '');
              }
            });
          }
        },
      }),
    },
  ],
});

// ─── Batch query limit ────────────────────────────────────────
// (remember: await server.start() before mounting expressMiddleware)
app.use('/graphql', (req, res, next) => {
  // Reject arrays of operations (batching attack)
  if (Array.isArray(req.body)) {
    return res.status(400).json({ error: 'Batched queries not allowed' });
  }
  // Max query string size
  if (req.body?.query?.length > 2000) {
    return res.status(400).json({ error: 'Query too long' });
  }
  next();
}, expressMiddleware(server));
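For intuition about what depthLimit(7) constrains, a toy brace-nesting counter over the raw query string gives a rough measure for simple queries — illustration only, since the real validation rule walks the parsed AST and correctly handles fragments, aliases, and braces inside strings:

```javascript
// Toy depth estimator: maximum brace nesting in a query string.
// NOT a substitute for graphql-depth-limit — shown only to make the
// "deeply nested query" risk concrete.
function naiveDepth(query) {
  let depth = 0, max = 0;
  for (const ch of query) {
    if (ch === '{') max = Math.max(max, ++depth);
    if (ch === '}') depth--;
  }
  return max;
}

console.log(naiveDepth('{ me { id } }'));                         // → 2
console.log(naiveDepth('{user{friends{friends{friends{id}}}}}')); // → 5
```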
👥 Responsible
Backend Developer
✅ Verification
# Test introspection disabled in production
curl -s -X POST https://api.example.com/graphql -d '{"query":"{__schema{types{name}}}"}' | python3 -c "import sys,json; d=json.load(sys.stdin); print('BLOCKED' if 'errors' in d else 'EXPOSED')"
→ BLOCKED (introspection disabled in production)
# Test depth limit
curl -s -X POST https://api.example.com/graphql -d '{"query":"{user{friends{friends{friends{friends{friends{friends{friends{id}}}}}}}}}"}'
→ {"errors":[{"message":"'user' exceeds maximum operation depth of 7"}]}
# Test batching blocked
curl -s -X POST https://api.example.com/graphql -d '[{"query":"{me{id}}"},{"query":"{users{id}}"}]'
→ HTTP 400 {"error":"Batched queries not allowed"}
📋 VAPT Closure Statement
🔒
Audit Closure — GraphQL Security
GraphQL introspection is disabled in production (enabled only in development). Query depth is limited to 7 levels via graphql-depth-limit. Query complexity scoring (max 1000 units) prevents expensive query combinations. Batch query arrays are rejected at the middleware level (HTTP 400). Query string length is capped at 2000 characters. Field suggestion messages are stripped from errors to prevent schema inference. All restrictions verified via targeted test queries.
15
API Versioning, Deprecation & Security Lifecycle
⚠ Issue
Old API versions with known vulnerabilities remain active indefinitely, and clients keep using deprecated endpoints that have weaker authentication or lack newer security controls. Without a versioning strategy, all clients must upgrade simultaneously — which is often impossible.
🔥 Risk
Attackers target deprecated API versions with known CVEs, bypassing security controls added in newer versions. Long-lived API keys on v1 with weaker validation remain exploitable indefinitely.
🔧 Fix — URL versioning + deprecation headers
src/middleware/deprecation.js
// ─── Mark deprecated API versions ─────────────────────────────
const DEPRECATED_VERSIONS = {
  'v1': { sunsetDate: '2026-12-31', successor: '/v2' },
  'v2': { sunsetDate: '2027-06-30', successor: '/v3' },
};

const deprecationHeaders = (req, res, next) => {
  const versionMatch = req.path.match(/\/(v\d+)\//);
  const version = versionMatch?.[1];
  if (version && DEPRECATED_VERSIONS[version]) {
    const { sunsetDate, successor } = DEPRECATED_VERSIONS[version];
    // RFC 8594 — Sunset header standard
    res.setHeader('Sunset', new Date(sunsetDate).toUTCString());
    res.setHeader('Deprecation', 'true');
    res.setHeader('Link', `<${successor}>; rel="successor-version"`);
  }
  next();
};

// ─── Hard block for end-of-life versions ──────────────────────
const EOL_VERSIONS = ['v0'];  // versions that have been shut down

const blockEolVersions = (req, res, next) => {
  const versionMatch = req.path.match(/\/(v\d+)\//);
  const version = versionMatch?.[1];
  if (EOL_VERSIONS.includes(version)) {
    return res.status(410).json({
      error: 'Gone',
      message: `API ${version} has been retired. Please migrate to /v3.`,
      docs: 'https://docs.example.com/migration',
    });
  }
  next();
};

app.use(blockEolVersions, deprecationHeaders);
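The header logic above can be exercised in isolation: per RFC 8594 the Sunset value must be an HTTP-date, which Date#toUTCString produces. A small sketch using the same illustrative dates and path regex:

```javascript
// Isolated pieces of the deprecation middleware, for unit testing.
// RFC 8594: Sunset carries an HTTP-date (RFC 7231 IMF-fixdate),
// which Date#toUTCString emits.
function sunsetHeader(isoDate) {
  return new Date(isoDate).toUTCString();
}

// Same version extraction used by the middleware above.
function versionFromPath(path) {
  const m = path.match(/\/(v\d+)\//);
  return m ? m[1] : null;
}

console.log(versionFromPath('/v1/orders/42')); // → 'v1'
console.log(versionFromPath('/orders/42'));    // → null
console.log(sunsetHeader('2026-12-31'));       // → 'Thu, 31 Dec 2026 00:00:00 GMT'
```

Note the regex requires a trailing slash after the version segment, so a bare `/v1` request would bypass both middlewares — worth covering in tests.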
👥 Responsible
Backend Developer, DevOps
📋 VAPT Closure Statement
🔒
Audit Closure — API Versioning (API9:2023)
API versioning follows URL path convention (/v1/, /v2/). Deprecated versions return Sunset and Deprecation: true headers per RFC 8594, informing clients of the retirement timeline. End-of-life versions return HTTP 410 (Gone) with migration instructions. Security controls (authentication, rate limiting, validation) are applied uniformly across all active versions — no security regression in older versions. A documented sunset process requires 90-day notice before retirement.
16
API Logging, Monitoring & Anomaly Detection
⚠ Issue
APIs without structured logging make security events (failed auth, rate-limit hits, BOLA attempts) invisible. Logging sensitive data (passwords, tokens, PII) in access logs creates new data exposure. No monitoring means attacks run undetected for days.
🔥 Risk
OWASP API7:2023 — Insufficient Logging & Monitoring. Average breach detection time: 277 days. API attacks succeed silently without alerting. Compliance failures (PCI-DSS Req 10, GDPR Art 33) for failing to detect and report breaches.
🔧 Fix — Structured security event logging
src/utils/securityLogger.js — Security event audit trail
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()  // machine-parseable JSON for SIEM
  ),
  defaultMeta: { service: 'api', version: process.env.API_VERSION },
  transports: [
    new winston.transports.File({ filename: '/var/log/api/error.log', level: 'error' }),
    new winston.transports.File({ filename: '/var/log/api/security.log', level: 'warn' }),
    new winston.transports.File({ filename: '/var/log/api/combined.log' }),
  ],
});

// ─── Security event helper ────────────────────────────────────
const secEvent = (event, req, data = {}) => {
  logger.warn({
    event,
    requestId: req.requestId,
    ip: req.ip,
    userId: req.user?.id || 'anonymous',
    userAgent: req.headers['user-agent'],
    method: req.method,
    path: req.path,
    // NEVER log: passwords, tokens, full request body with PII
    ...data,
  });
};

// ─── Structured HTTP access log ───────────────────────────────
const morgan = require('morgan');
app.use(morgan(function (tokens, req, res) {
  return JSON.stringify({
    method: tokens.method(req, res),
    url: tokens.url(req, res),
    status: Number(tokens.status(req, res)),
    duration: Number(tokens['response-time'](req, res)),
    requestId: req.requestId,
    ip: tokens['remote-addr'](req, res),
    // ✗ NEVER log Authorization header, cookies, or API keys
  });
}, { stream: { write: msg => logger.info(JSON.parse(msg)) } }));

module.exports = { logger, secEvent };

// ─── Use secEvent for key security events ─────────────────────
// Login failure:         secEvent('LOGIN_FAILED', req, { email: req.body.email })
// BOLA attempt:          secEvent('BOLA_ATTEMPT', req, { resourceId })
// Rate limit hit:        secEvent('RATE_LIMITED', req)
// Unauthorized function: secEvent('BFLA_ATTEMPT', req, { requiredRole })
// Injection attempt:     secEvent('INJECTION_ATTEMPT', req, { field: 'email' })
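The "✗ NEVER log" rules above can be enforced mechanically rather than by convention, by running every log payload through a redaction pass before it reaches the logger. A hedged sketch — the key list is illustrative, not exhaustive:

```javascript
// Defensive redaction: strip known-sensitive keys from any object before
// it is handed to the logger, so a careless call site cannot leak them.
const REDACT_KEYS = new Set([
  'password', 'authorization', 'cookie', 'set-cookie',
  'token', 'apikey', 'api_key', 'secret',
]);

function redact(obj) {
  if (obj === null || typeof obj !== 'object') return obj;
  if (Array.isArray(obj)) return obj.map(redact);
  const out = {};
  for (const [k, v] of Object.entries(obj)) {
    out[k] = REDACT_KEYS.has(k.toLowerCase()) ? '[REDACTED]' : redact(v);
  }
  return out;
}

const entry = redact({
  event: 'LOGIN_FAILED',
  headers: { authorization: 'Bearer abc', 'user-agent': 'curl/8' },
  body: { email: 'a@x.com', password: 'hunter2' },
});
console.log(entry.body.password);          // → '[REDACTED]'
console.log(entry.headers.authorization);  // → '[REDACTED]'
console.log(entry.headers['user-agent']);  // → 'curl/8'
```

Wiring this into a Winston custom format keeps redaction in one place instead of at every secEvent call site.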
👥 Responsible
Backend Developer, Security Team
📋 VAPT Closure Statement
🔒
Audit Closure — Logging & Monitoring (API7:2023)
Structured JSON logging via Winston captures all security events (login failures, BOLA attempts, BFLA attempts, rate limit triggers, injection patterns) with requestId correlation, IP, user ID, and timestamp. Authorization headers, cookies, and API keys are excluded from all log entries. Log files are segregated (security.log, error.log, combined.log) with 90-day retention and forwarding to SIEM. Anomaly detection rules alert on >10 consecutive 401s from the same IP within 5 minutes.
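The alert rule above (>10 consecutive 401s from one IP within 5 minutes) can be approximated in-process with a per-IP sliding window — a minimal sketch assuming a single-node deployment; a real setup would track this in the SIEM or in Redis so the counter survives restarts and spans instances:

```javascript
// Sliding-window counter: record 401 timestamps per IP and flag the IP
// once it exceeds the threshold inside the window.
const WINDOW_MS = 5 * 60 * 1000;  // 5 minutes
const THRESHOLD = 10;
const failures = new Map();       // ip -> [timestamps]

function record401(ip, now = Date.now()) {
  const hits = (failures.get(ip) || []).filter(t => now - t < WINDOW_MS);
  hits.push(now);
  failures.set(ip, hits);
  return hits.length > THRESHOLD;  // true → raise SIEM alert for this IP
}

// 11 failures in quick succession trip the alert; the 10th does not.
let alerted = false;
for (let i = 0; i < 11; i++) alerted = record401('203.0.113.7', 1000 + i);
console.log(alerted);                        // → true
console.log(record401('203.0.113.8', 1000)); // → false (first failure)
```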
17
Error Handling & Information Disclosure
⚠ Issue
Production APIs returning stack traces, database error messages (revealing table names, column names), framework version strings, and internal file paths — all inadvertently shared with attackers in error responses.
🔥 Risk
Stack traces reveal internal architecture, database schema from SQL error messages, file paths from Node.js errors aid targeted attacks, and library versions from error headers enable CVE-targeted exploitation.
🔧 Fix — RFC 7807 Problem Details + safe error handler
src/middleware/errorHandler.js — Production-safe error handler
/**
 * Centralized API error handler.
 * RFC 7807 — Problem Details for HTTP APIs (application/problem+json)
 * Strips internal details in production.
 */
const { v4: uuid } = require('uuid');
// assumes a shared logger (e.g., the Winston instance from src/utils/securityLogger.js)

// ─── Custom API error class ───────────────────────────────────
class ApiError extends Error {
  constructor(status, type, title, detail, extra = {}) {
    super(title);
    this.status = status;
    this.type = type;      // e.g., '/errors/validation-failed'
    this.title = title;    // human-readable summary
    this.detail = detail;  // safe detail for client
    this.extra = extra;    // extra fields (errors, hints)
    this.isApiError = true;
  }
}

// ─── Global error handler (MUST be last middleware) ───────────
const errorHandler = (err, req, res, next) => {
  const isProd = process.env.NODE_ENV === 'production';
  const errorId = uuid();

  // Always log full error server-side
  logger.error({
    errorId,
    message: err.message,
    stack: err.stack,
    requestId: req.requestId,
    path: req.path,
    userId: req.user?.id,
  });

  // ─── Known API errors ───────────────────────────────────────
  if (err.isApiError) {
    return res.status(err.status)
      .type('application/problem+json')
      .json({
        type: err.type,
        title: err.title,
        status: err.status,
        detail: err.detail,
        instance: req.path,
        requestId: req.requestId,
        ...err.extra,
      });
  }

  // ─── Unknown errors: never expose internals in prod ─────────
  res.status(500)
    .type('application/problem+json')
    .json({
      type: '/errors/internal-error',
      title: 'An unexpected error occurred',
      status: 500,
      detail: isProd ? 'Please contact support' : err.message,
      requestId: req.requestId,
      errorId,  // share error ID — NOT the stack trace
      // stack: isProd ? undefined : err.stack  ← never in prod
    });
};

module.exports = { ApiError, errorHandler };
👥 Responsible
Backend Developer
✅ Verification
# Trigger a 500 error — production must NOT show stack trace
curl -s https://api.example.com/api/broken-endpoint | python3 -m json.tool
→ {"type":"/errors/internal-error","title":"An unexpected error occurred","status":500,"requestId":"...","errorId":"..."}
# stack, message, filePath: NOT present in production response ✓
# Server logs MUST have full error details
grep "$errorId" /var/log/api/error.log
→ {"errorId":"...","message":"Cannot read property...","stack":"Error: ...\n at /var/www/...",...}
📋 VAPT Closure Statement
🔒
Audit Closure — Error Handling & Disclosure
A centralized error handler strips all internal details (stack traces, file paths, library names, database messages) from production responses. Errors follow RFC 7807 Problem Details format, returning only a human-readable title, HTTP status, request ID, and an error correlation ID. Full error details are logged server-side for forensic investigation. Production responses are confirmed to contain zero stack traces, SQL queries, file paths, or framework version strings via automated response inspection.
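The "automated response inspection" mentioned above can be a simple scan of serialized error bodies for disclosure markers — a sketch with illustrative, non-exhaustive patterns:

```javascript
// Scan a serialized error response for common information-disclosure
// markers: stack frames, filesystem paths, SQL fragments.
const DISCLOSURE_PATTERNS = [
  /\bat\s+\S+\s+\(\/.*\.js:\d+:\d+\)/,   // Node.js stack frame
  /\/var\/www|\/home\/\w+|[A-Z]:\\/,     // server file paths
  /SELECT\s+.+\s+FROM|ER_PARSE_ERROR/i,  // SQL error leakage
  /"stack"\s*:/,                         // serialized Error.stack
];

function leaksInternals(responseBody) {
  return DISCLOSURE_PATTERNS.some(re => re.test(responseBody));
}

const safe = JSON.stringify({
  type: '/errors/internal-error',
  title: 'An unexpected error occurred',
  status: 500,
});
const leaky = JSON.stringify({
  message: 'boom',
  stack: 'Error: boom\n    at handler (/var/www/api/src/app.js:42:11)',
});
console.log(leaksInternals(safe));  // → false
console.log(leaksInternals(leaky)); // → true
```

Running this over every error-path response in integration tests turns "no stack traces in production" from a policy into a regression check.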
18
API Gateway Hardening & mTLS
⚠ Issue
APIs exposed directly without a gateway layer lack centralized authentication, rate limiting, WAF protection, and audit logging. Service-to-service communication over plain HTTP (no mTLS) allows lateral movement after a service is compromised.
🔥 Risk
Internal service impersonation without mTLS, bypass of gateway-level WAF by targeting services directly, and lack of centralized API visibility for security monitoring across a microservices architecture.
🔧 Fix — Nginx as API gateway + mTLS for internal services
/etc/nginx/sites-enabled/api-gateway.conf — Gateway hardening
# ─── Nginx as API Gateway ─────────────────────────────────────
# NOTE: assumes a rate-limit zone defined in the http{} block, e.g.:
#   limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
upstream api_backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    # (ssl_certificate / ssl_protocols / ciphers as in Control 11)

    # ─── Global WAF-like rules ──────────────────────────────────
    # Block common attack patterns
    set $block_query 0;
    if ($query_string ~* "union.*select.*\(") { set $block_query 1; }
    if ($query_string ~* "<script") { set $block_query 1; }
    if ($query_string ~* "javascript:") { set $block_query 1; }
    if ($block_query = 1) { return 403; }

    # Block scanner user agents
    if ($http_user_agent ~* "(nikto|sqlmap|nmap|masscan|dirbuster|gobuster)") {
        return 403;
    }

    # Max request body size
    client_max_body_size 10m;

    # ─── Proxy to backend ───────────────────────────────────────
    location /api/ {
        limit_req zone=api burst=60 nodelay;
        limit_req_status 429;

        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;

        # Timeouts (prevent slowloris-style attacks)
        proxy_connect_timeout 10s;
        proxy_read_timeout 30s;
        proxy_send_timeout 10s;
    }

    # ─── Block direct access to internal paths ──────────────────
    location ~ ^/api/(internal|admin/system)/ {
        allow 10.0.0.0/8;  # only internal network
        deny all;
        proxy_pass http://api_backend;
    }
}
mTLS — Service-to-service mutual TLS
# ─── Generate service certificates (internal CA) ──────────────
# 1. Create internal CA
$ openssl genrsa -out ca.key 4096
$ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=Internal-API-CA"

# 2. Generate a cert for each microservice
$ openssl genrsa -out order-service.key 2048
$ openssl req -new -key order-service.key -out order-service.csr -subj "/CN=order-service"
$ openssl x509 -req -days 365 -in order-service.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out order-service.crt

# ─── Node.js: require mTLS for internal API calls ─────────────
const https = require('https');
const fs = require('fs');

const agent = new https.Agent({
  cert: fs.readFileSync('/certs/order-service.crt'),
  key: fs.readFileSync('/certs/order-service.key'),
  ca: fs.readFileSync('/certs/ca.crt'),
  rejectUnauthorized: true,  // ALWAYS verify peer certificate
});

// All internal service calls use the mTLS agent
const axios = require('axios');
const internalApi = axios.create({ httpsAgent: agent });
👥 Responsible
DevOps, Security Team
📋 VAPT Closure Statement
🔒
Audit Closure — API Gateway & mTLS
Nginx serves as the API gateway with WAF-style rules blocking common injection patterns, vulnerability scanner user agents, and oversized payloads. Internal admin paths are restricted to the internal network (10.0.0.0/8). Request/response timeouts prevent slow loris attacks. Mutual TLS is implemented for service-to-service communication with certificate verification — unauthorized services cannot make calls to internal APIs. Client certificates are signed by an internal CA with 365-day validity and rotation policy.