
OWASP Top 10 in Real Life: How Apps Actually Get Hacked

Concrete examples of each risk + what good fixes look like.

Reading time: ~8–12 min
Level: All levels

The OWASP Top 10 isn’t a list of “theoretical vulnerabilities” — it’s a map of how real apps fail in production. In this guide, we’ll walk through each risk with a concrete, realistic scenario, then translate it into fixes that teams actually ship: authorization patterns, secure defaults, supply chain hygiene, and failure-safe behaviors.

This is a defensive, builder-focused post: the goal is to help you recognize the shape of common attacks and prevent them without burying your codebase in security red tape.


Quickstart

If you only have 60–90 minutes, do these in order. They catch the most common “we got hacked and don’t know how” incidents without needing a full security program.

1) Fix access control at the server (deny-by-default)

Most web app breaches aren’t “Hollywood hacking” — they’re someone doing a normal request to a resource they shouldn’t see. The client is not a security boundary; your API is.

  • Audit endpoints that use IDs: /users/:id, /orders/:id, /orgs/:orgId
  • Enforce ownership/tenant checks on every request (not just UI)
  • Prefer deny-by-default: allow only what’s explicitly permitted
  • Log authorization failures (with user + resource identifiers)

2) Kill insecure defaults (misconfig + secrets)

Debug toggles, overly-permissive CORS, open admin panels, and leaked secrets are “free wins” for attackers. Secure configuration is a feature — treat it like one.

  • Disable debug/stack traces in production
  • Restrict CORS to known origins and methods
  • Set secure cookie flags: HttpOnly, Secure, SameSite
  • Rotate any secret that ever touched logs, commits, or chat
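The cookie-flag item above can be sketched with the stdlib `http.cookies` module (the cookie name `session` is a placeholder; most frameworks set these flags through their session config, so treat this as an illustration of the header you want to end up with):

```python
from http.cookies import SimpleCookie

def hardened_session_cookie(token: str) -> str:
    """Render a Set-Cookie header value with HttpOnly, Secure, and SameSite set."""
    cookie = SimpleCookie()
    cookie["session"] = token
    cookie["session"]["httponly"] = True   # not readable from JavaScript
    cookie["session"]["secure"] = True     # only sent over HTTPS
    cookie["session"]["samesite"] = "Lax"  # blocks most cross-site sends
    cookie["session"]["path"] = "/"
    return cookie["session"].OutputString()
```

Whatever stack you use, the check is the same: inspect a real response in the browser devtools and confirm all three flags are present on session cookies.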

3) Lock down the supply chain (dependencies + CI)

Modern apps are assembled from packages, containers, and build pipelines. If you can’t trust the build, you can’t trust the app.

  • Pin CI actions/build steps to immutable versions (or digests)
  • Require reviews for dependency upgrades
  • Generate an SBOM and keep it with releases
  • Store secrets in a manager; never in repo variables

4) Improve detection + failure behavior

Prevention is necessary — but you also need to notice when prevention fails. The fastest “real-life hardening” is logging and safe failure.

  • Log auth events, privilege changes, and “high value” actions
  • Alert on spikes in errors, auth failures, and unusual access patterns
  • Return generic errors to users; keep details in logs
  • Add timeouts and rate limits to stop resource exhaustion

How to use this post

Read the Top 10 sections like a checklist: for each category, ask “Do we have a consistent pattern for this?” If the answer is “it depends” — that’s where incidents happen.

Overview

The OWASP Top 10 is a standard awareness list of the most critical risks in web apps. “Top” doesn’t mean “most common bug in your codebase” — it means the patterns that repeatedly lead to real compromises: account takeover, data leakage, malicious updates, and outages that become security incidents.

What you’ll get from this guide

  • Realistic attack stories (no exploit payloads, just the shape of the failure)
  • Practical fixes you can implement as reusable patterns
  • A mini-audit workflow you can run on any app in a day
  • A cheatsheet to keep in PR reviews and threat modeling

Note: OWASP updates its categories over time. The names shift, but the underlying themes stay stable: authorization bugs, insecure defaults, dangerous trust in inputs and dependencies, and weak detection. This post focuses on how these risks show up in real systems — and what “good” looks like in code and operations.

  • Trust boundaries. In production: the app trusts the client, internal network, or “private” endpoints. Good: server-side enforcement, zero-trust assumptions.
  • Secure defaults. In production: debug mode, permissive CORS, public buckets, weak headers. Good: hardened baseline templates, environment gates.
  • Integrity. In production: malicious dependency update or compromised CI workflow. Good: pinned builds, provenance checks, least privilege in CI.
  • Resilience + detection. In production: incidents go unnoticed, errors leak data, retries amplify outages. Good: structured logs, alerts, safe error handling, timeouts.

A useful mental model

Most breaches are not one bug. They’re a chain: misconfiguration creates a foothold, access control fails to contain it, and logging gaps delay detection. Fixing the links is how you prevent the chain.

Core concepts

Before we dive into the Top 10, align on a few concepts. These are the building blocks that turn “security advice” into repeatable engineering patterns.

Authentication vs authorization (and why teams mix them up)

Authentication answers “Who are you?” Authorization answers “What are you allowed to do?” Most real incidents happen when an app authenticates correctly — then authorizes incorrectly.

  • Authentication: proof of identity (passwords, MFA, SSO). Common failure: weak reset flows, token theft, brute force.
  • Authorization: permission checks (roles, ownership, tenant). Common failure: ID-based access bugs (“I can see someone else’s data”).
  • Session management: how identity persists (cookies/tokens, expiry, rotation). Common failure: long-lived tokens, missing revocation, unsafe storage.

Trust boundaries: where “internal” becomes dangerous

A trust boundary is any place your app accepts input or assumptions from something you don’t fully control: browsers, mobile clients, third-party webhooks, background jobs, internal microservices, even your own CI system. The OWASP Top 10 repeatedly boils down to trusting the wrong boundary.

Typical trust boundaries in web apps

  • Public APIs (browser/mobile)
  • Admin panels and internal tools
  • Webhooks from external services
  • File uploads and image processing
  • CI/CD pipelines and build artifacts

A practical rule

If a value can be influenced by a user, treat it as untrusted — even if it “came from our frontend” or “is only used by an internal service.”

Secure-by-default beats perfect-by-policy

Security programs fail when every team must remember 50 rules. They succeed when the default templates are safe: secure headers, strict CORS, hardened cookie settings, dependency pinning, and logging hooks already present. Your future self will thank you.

Most “security bugs” are missing patterns

If you find the same bug class twice (e.g., “endpoint forgot authorization”), treat it like a product issue: fix the pattern, not just the instance.
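One way to “fix the pattern” for the recurring forgot-authorization bug is a single reusable helper. A sketch, framework-agnostic, with hypothetical dict-shaped `user` and resource records (real code would use your ORM models and policy layer):

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when a deny-by-default check does not find an explicit allow."""

def require_owner(load_resource):
    """Decorator pattern: load the resource server-side, then deny by default."""
    def decorator(handler):
        @wraps(handler)
        def wrapped(user, resource_id):
            resource = load_resource(resource_id)
            if resource is None:
                raise LookupError("not found")
            # Deny-by-default: only the owner in the same org may proceed.
            if user["org_id"] != resource["org_id"] or user["id"] != resource["owner_id"]:
                raise Forbidden("forbidden")
            return handler(user, resource)
        return wrapped
    return decorator

# Usage sketch with an in-memory store:
DOCS = {"d1": {"org_id": "org1", "owner_id": "u1", "body": "hello"}}

@require_owner(lambda rid: DOCS.get(rid))
def read_doc(user, doc):
    return doc["body"]
```

The point is that every new endpoint gets the check by adding one decorator, instead of re-deriving the logic each time.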

Step-by-step

Here’s a practical way to use the OWASP Top 10 on a real application: run a lightweight audit that maps your app’s attack surface, then walk through each category with “what fails” and “what good looks like”.

Step 1 — Inventory your attack surface (30 minutes)

  • List entry points: web, API, admin, webhook, mobile, partner integrations
  • List data assets: user profiles, billing, documents, internal dashboards
  • List “high-value” actions: password reset, role changes, payouts, exports
  • Write down trust boundaries: external services, internal microservices, CI/CD

This takes the conversation from “security is everything” to “security is these specific flows.”

Step 2 — Walk the Top 10 with real-life scenarios

Below are the Top 10 categories as they show up in real apps: what typically happens, what attackers leverage, and how to fix the root cause.

A01 — Broken Access Control

Real-life scenario: A logged-in user can access another user’s document/order/invoice by requesting it with a different ID. This often happens after “moving fast” with a UI that hides buttons — but the API never enforces ownership.

What breaks

  • Object-level access (ownership, tenant boundaries)
  • Function-level access (admin-only actions)
  • Mass assignment (client sets fields it shouldn’t)
  • Default-allow endpoints (“we’ll add auth later”)

What “good” looks like

  • Server-side authorization on every request
  • Central policy/middleware (not copy-pasted checks)
  • Deny-by-default and explicit allow rules
  • Security tests for tenant isolation

/**
 * Express example: object-level authorization (tenant + ownership).
 * Pattern: load resource, then authorize based on server-trusted fields.
 */
import express from "express";

const app = express();

// `db` used below is an assumed data-access layer (not shown).
// Pretend auth middleware sets req.user = { id, orgId, role }
function requireAuth(req, res, next) {
  if (!req.user) return res.status(401).json({ error: "unauthorized" });
  next();
}

function canReadInvoice(user, invoice) {
  // Deny-by-default; allow only explicit rules.
  if (user.role === "admin" && user.orgId === invoice.orgId) return true;
  return user.orgId === invoice.orgId && user.id === invoice.ownerId;
}

app.get("/api/invoices/:invoiceId", requireAuth, async (req, res) => {
  const invoice = await db.invoices.findById(req.params.invoiceId);
  if (!invoice) return res.status(404).json({ error: "not found" });

  if (!canReadInvoice(req.user, invoice)) {
    // Log on the server (not shown): userId, orgId, invoiceId, outcome=denied
    return res.status(403).json({ error: "forbidden" });
  }

  res.json({ invoice });
});

A02 — Security Misconfiguration

Real-life scenario: An app ships with a permissive CORS config, verbose error pages, open debug endpoints, public storage buckets, or “temporary” admin routes. These aren’t exotic — they’re operational footguns.

Hardening checklist you can automate
  • Separate dev/staging/prod configs; block prod startup if debug is enabled
  • Restrict CORS and disable wildcard credentials
  • Set security headers where appropriate (and test them)
  • Harden file upload settings (types, sizes, storage permissions)
  • Least privilege for service accounts and database roles
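The first checklist item is the easiest to automate: a startup guard that refuses to boot production with unsafe settings. A minimal sketch (the env var names `APP_ENV`, `DEBUG`, and `CORS_ALLOW_ORIGIN` are illustrative, not a standard):

```python
import os

def assert_safe_production_config(env: dict) -> None:
    """Fail fast at startup instead of shipping debug mode to production."""
    if env.get("APP_ENV") != "production":
        return  # dev/staging may run looser settings
    violations = []
    if env.get("DEBUG", "").lower() in ("1", "true", "yes"):
        violations.append("DEBUG must be off in production")
    if env.get("CORS_ALLOW_ORIGIN") == "*":
        violations.append("CORS must not be a wildcard in production")
    if violations:
        raise RuntimeError("unsafe production config: " + "; ".join(violations))

# Call once before the app starts serving traffic:
# assert_safe_production_config(dict(os.environ))
```

A guard like this turns “we forgot to disable debug” from a silent misconfiguration into a deploy that visibly fails.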

A03 — Software Supply Chain Failures

Real-life scenario: A dependency is compromised upstream, or a CI workflow is edited to run untrusted code with powerful tokens. Your app “gets hacked” even though your application logic is fine — because the build system is the new perimeter.

Where compromises happen

  • Typosquatted packages or malicious updates
  • CI actions pinned to mutable tags
  • Over-privileged CI tokens (write-all by default)
  • Unreviewed changes to build/release pipelines

Defensive moves that scale

  • Pin dependencies and CI actions to immutable versions
  • Require approvals for workflow changes
  • Generate and store an SBOM per release
  • Minimal permissions for CI; separate deploy credentials

# GitHub Actions example: reduce supply chain risk.
# - Pin actions to a commit SHA (immutable)
# - Use minimal permissions
# - Run dependency review on PRs
name: ci

on:
  pull_request:
  push:
    branches: [ "main" ]

permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11

      - name: Setup Node
        uses: actions/setup-node@1a4442e0c7b6a7fdc5c2ccda5c0c5d37a8e5cdb4
        with:
          node-version: "20"
          cache: "npm"

      - name: Install (locked)
        run: npm ci

      - name: Dependency review (PR only)
        if: github.event_name == 'pull_request'
        uses: actions/dependency-review-action@0ee6b24b0acb6c2f7c5bd1b7d83d6b8ad0e6d6f3

      - name: Test
        run: npm test

A04 — Cryptographic Failures

Real-life scenario: Sensitive data is stored or transmitted without the right protections. It might be passwords with weak hashing, tokens without expiry, secrets hardcoded in config, or “encryption” implemented incorrectly.

What “good crypto” looks like in teams
  • Use proven libraries and safe defaults (no custom crypto schemes)
  • Hash passwords with a modern password-hashing algorithm; never encrypt passwords
  • Protect keys: store in a secrets manager/KMS; rotate and scope access
  • Use TLS everywhere; avoid mixed content; enforce secure cookies
  • Minimize sensitive data collection and retention (the best “encryption”)
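To make “hash, never encrypt” concrete, here is a stdlib-only sketch using `hashlib.scrypt`. Many teams prefer a dedicated library (e.g. argon2) in production, and the cost parameters below are illustrative rather than a tuning recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Salted, memory-hard hash. Store salt + digest together; never store plaintext."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, expected = stored[:16], stored[16:]
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, expected)
```

Note there is no decrypt function anywhere: that is the point. If you can recover the password, so can an attacker who steals the database and the key.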

A05 — Injection

Real-life scenario: Untrusted input reaches a “sink” (database, template, command runner, query language) without a safe interface. Injection isn’t just SQL — it’s any place strings become “code.”

Common injection surfaces

  • SQL/NoSQL queries built via string concatenation
  • Search queries (Lucene/Elastic) and filter languages
  • Template rendering without escaping
  • OS command execution (image tools, converters)

Safe pattern

  • Parameterize queries (prepared statements)
  • Use allowlists for fields/operators when building dynamic filters
  • Escape output based on context (HTML vs JS vs URL)
  • Prefer libraries that separate data from code

"""
Python example: parameterized queries + safe errors.
- Use bind parameters (never string-build queries from user input).
- Return generic error messages; log detailed context server-side.
"""
import logging
import psycopg

log = logging.getLogger("app")

def get_order_for_user(conn: psycopg.Connection, user_id: str, order_id: str) -> dict | None:
  # Object-level authorization belongs in the query as well (defense-in-depth).
  sql = """
    SELECT id, user_id, total_cents, status
    FROM orders
    WHERE id = %s AND user_id = %s
  """
  with conn.cursor() as cur:
    cur.execute(sql, (order_id, user_id))
    row = cur.fetchone()
    if not row:
      return None
    return {"id": row[0], "user_id": row[1], "total_cents": row[2], "status": row[3]}

def handle_request(conn, user_id, order_id):
  try:
    order = get_order_for_user(conn, user_id, order_id)
    if order is None:
      return 404, {"error": "not found"}
    return 200, {"order": order}
  except Exception as exc:
    # Log with structured context; avoid leaking internals to clients.
    log.exception("order_lookup_failed", extra={"user_id": user_id, "order_id": order_id})
    return 500, {"error": "internal error"}
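The allowlist item from the safe-pattern list deserves its own sketch: user input selects from known-safe identifiers and never reaches the SQL string directly (the column names here are illustrative; values would still go through bind parameters):

```python
# Allowlist: map user-facing sort keys to known-safe SQL column names.
SORTABLE = {"created": "created_at", "total": "total_cents", "status": "status"}

def build_order_query(sort_key: str, descending: bool = False) -> str:
    """Build ORDER BY from an allowlist; reject anything not explicitly permitted."""
    column = SORTABLE.get(sort_key)
    if column is None:
        raise ValueError(f"unsupported sort key: {sort_key!r}")
    direction = "DESC" if descending else "ASC"
    # Only allowlisted identifiers ever appear in the SQL text.
    return f"SELECT id, total_cents, status FROM orders ORDER BY {column} {direction}"
```

This matters because identifiers (column names, operators) cannot be bind parameters, so allowlisting is the parameterization equivalent for the parts of a query that must be assembled dynamically.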

A06 — Insecure Design

Real-life scenario: The code is “correct,” but the system’s rules are unsafe. Examples: unlimited password reset attempts, no abuse controls, no tenant isolation concept, or workflows where a low-privilege user can trigger high-impact actions indirectly.

Design questions to ask early
  • What are the “high value” actions and who should be able to do them?
  • What happens if an attacker automates this endpoint 10,000 times?
  • What is the safe behavior when the system is unsure (fallback)?
  • Do we have a consistent model of tenants, roles, and ownership?

A07 — Authentication Failures

Real-life scenario: Account takeover via weak login protections, unsafe password reset, missing MFA for admins, token reuse, or sessions that don’t expire.

What to harden
  • Rate limits on login and reset flows
  • MFA for admins and sensitive operations
  • Short-lived sessions + rotation on sensitive events
  • Secure cookie settings and CSRF protections where needed

Usable security tip

Make the secure path the easy path: passkeys/MFA prompts that are predictable, and session expiry that’s visible to users. Confusing auth UX creates “workarounds” that become security bugs.

A08 — Software or Data Integrity Failures

Real-life scenario: The app accepts data or updates that aren’t verified: unsigned webhooks, tampered files, insecure deserialization, untrusted update channels, or “trust me” admin imports.

Defensive integrity checks
  • Verify webhook signatures and timestamps (reject replays)
  • Validate uploaded files beyond extension (content-type + scanning)
  • Use signed artifacts and verified updates for releases
  • Prefer safe serialization formats; avoid executing data as code

A09 — Security Logging & Alerting Failures

Real-life scenario: An attacker tries dozens of auth flows, hits unusual endpoints, or probes tenant boundaries — and nobody sees it. Logging exists, but it’s unstructured, missing key events, or not tied to alerts.

Events worth logging (minimum)
  • Login success/failure, password reset requests
  • Privilege/role changes, admin actions
  • High-value actions (exports, payouts, deletions)
  • Authorization failures (403) with user + resource identifiers

Alerts worth having (minimum)
  • Spikes in auth failures or 403/404 anomalies
  • New admin creation or permission escalations
  • Sudden increase in error rates (5xx) or latency
  • Suspicious access from new geos/devices (if applicable)

A10 — Mishandling of Exceptional Conditions

Real-life scenario: When something goes wrong (timeouts, nulls, unexpected input, third-party failures), the app leaks details, fails open, retries in a loop, or crashes in ways that expose data or create an availability incident.

Make failure safe
  • Return generic errors to clients; keep details in logs
  • Fail closed on authorization and money-moving operations
  • Add timeouts, circuit breakers, and bounded retries
  • Handle null/missing cases deliberately (don’t “assume happy path”)

Exceptional conditions aren’t “edge cases” — they’re where attackers and outages live.
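The timeout/retry items above can be one small wrapper: bounded attempts, exponential backoff, and a hard stop instead of an unbounded retry loop. A sketch (real code would add jitter, catch only retryable errors, and pair this with a circuit breaker):

```python
import time

def call_with_bounded_retries(fn, *, attempts: int = 3, backoff_seconds: float = 0.1,
                              sleep=time.sleep):
    """Retry a flaky call a bounded number of times, then fail loudly (never loop forever)."""
    if attempts < 1:
        raise ValueError("attempts must be >= 1")
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:  # narrow this to retryable errors in real code
            last_error = exc
            if attempt < attempts - 1:
                sleep(backoff_seconds * (2 ** attempt))  # exponential backoff
    raise last_error
```

The `sleep` parameter is injected so the behavior is testable without real delays; the same trick keeps retry logic out of your unit tests' wall-clock time.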

Step 3 — Turn findings into patterns (not a bug list)

The fastest way to improve security is to convert repeated problems into reusable patterns: one authorization helper, one secure config baseline, one logging policy, one dependency/CI policy.

A practical workflow

Fix one endpoint, then immediately generalize it into a helper/middleware/template. That’s how “security work” becomes “less work next week.”

Common mistakes

These are the “we didn’t think that mattered” mistakes that repeatedly show up in real incidents. Each has a simple fix you can standardize.

Mistake 1 — “The UI hides it, so it’s safe”

Client-side checks are for UX. Attackers don’t use your UI; they use your API.

  • Fix: server-side authorization on every request.
  • Fix: test tenant isolation with automated checks (happy-path tests won’t catch it).

Mistake 2 — Default-allow access control

“We’ll add roles later” becomes “we shipped a data leak.” Access control must be explicit.

  • Fix: deny-by-default policies.
  • Fix: centralize authorization decisions (middleware/policy layer).

Mistake 3 — Shipping with insecure configuration “temporarily”

Debug endpoints, permissive CORS, and open admin routes rarely get removed — they get forgotten.

  • Fix: production start-up guards (fail to boot if debug is on).
  • Fix: baseline hardening templates and environment-specific config.

Mistake 4 — Treating dependencies and CI as “someone else’s problem”

Modern compromises often enter through a package, pipeline, or token — not through your controller code.

  • Fix: pin CI actions and dependencies to immutable versions.
  • Fix: least privilege CI permissions and protected workflow changes.

Mistake 5 — Logging everything (including secrets)

Over-logging creates a new breach surface: logs become a shadow database.

  • Fix: redact secrets and tokens by default.
  • Fix: log events and identifiers, not raw payloads.
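A sketch of redact-by-default: scrub secret-shaped fields before a record is written. The key names are illustrative, and note that production filters are often allowlists (log only known-safe fields) rather than blocklists like this one:

```python
REDACTED_KEYS = {"password", "token", "authorization", "api_key", "secret", "cookie"}

def redact(record: dict) -> dict:
    """Return a copy of a log record with secret-shaped fields masked."""
    clean = {}
    for key, value in record.items():
        if key.lower() in REDACTED_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = redact(value)  # recurse into nested payloads
        else:
            clean[key] = value
    return clean
```

Wiring this into a logging filter or formatter means individual call sites cannot forget to redact, which is exactly the “pattern, not instance” fix.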

Mistake 6 — Error handling that leaks internals or fails open

Stack traces and “helpful” errors can reveal sensitive details. Failing open can turn a bug into a breach.

  • Fix: generic client errors + detailed server logs.
  • Fix: fail closed for authorization and money/data moving operations.

The fastest improvement loop

When you fix a mistake once, turn it into a reusable pattern. If you don’t, it will come back as “a different bug” next sprint.

FAQ

Is the OWASP Top 10 a checklist for pen testing?

Not exactly. It’s primarily an awareness and prioritization list. You can use it to guide testing, but the highest leverage use is to build patterns and defaults so you stop re-introducing the same classes of bugs.

What’s the difference between “Broken Access Control” and “Authentication Failures”?

Authentication is proving identity (login, MFA, tokens). Access control is enforcing permissions after identity is known. In practice, apps often authenticate correctly and still leak data because authorization is missing or inconsistent.

How do I prioritize fixes when everything feels risky?

Start where breaches scale: (1) access control on data endpoints, (2) misconfiguration + secrets exposure, (3) supply chain + CI permissions, then (4) logging/alerting and safe error handling. These reduce both likelihood and blast radius quickly.

Do I need expensive tools to address the Top 10?

No. Good security is mostly about patterns: deny-by-default authorization, parameterized queries, pinned builds, secure configuration templates, and basic monitoring. Tools help, but they don’t replace the fundamentals.

What does “Software or Data Integrity Failures” mean in plain English?

It means you accepted something you shouldn’t trust. Unsigned webhooks, tampered files, unsafe imports, or unverified updates. The fix is verification: signatures, checksums, provenance, and strict validation.

How do I avoid leaking sensitive info in logs while still being able to debug?

Log events and identifiers, not raw secrets. Use structured logs with request IDs, user IDs, and resource IDs, redact tokens automatically, and move detailed payload inspection behind secure, temporary debugging workflows.

Cheatsheet

Keep this near your PR reviews. If your team consistently answers these items, you’ll eliminate most common web app compromises.

Build-time (supply chain + defaults)

  • Pin dependencies and CI actions to immutable versions
  • Restrict CI permissions; protect workflow edits
  • Generate an SBOM per release; keep it with artifacts
  • Secrets come from a manager/KMS, not the repo
  • Security baseline templates for headers, cookies, CORS

Runtime (auth + input + integrity)

  • Authorization on every endpoint (object + function level)
  • Deny-by-default policies; centralize checks
  • Parameterized queries; allowlists for dynamic filters
  • Verify signatures for webhooks/imports; validate uploads
  • Strong auth protections: rate limits, MFA for sensitive ops

Operations (detection + safe failure)

  • Logging. Minimum standard: auth events, privilege changes, high-value actions, 403 spikes. Why: detects probing and real attacks early.
  • Alerting. Minimum standard: error/latency spikes, unusual auth patterns, new admins. Why: shortens “time to know” (critical in incidents).
  • Error handling. Minimum standard: generic client errors + detailed server logs. Why: prevents data leaks and reduces attacker intel.
  • Resilience. Minimum standard: timeouts, bounded retries, circuit breakers. Why: stops outages from turning into security incidents.

PR review shortcut

Ask two questions on any endpoint change: (1) “Where is authorization enforced?” and (2) “What happens on failure?” If either answer is unclear, you just found the next incident.

Wrap-up

The OWASP Top 10 becomes powerful when you treat it as engineering guidance, not a poster. The “real life” version is simple: enforce authorization consistently, remove insecure defaults, harden the supply chain, validate untrusted inputs, and build detection + safe failure into your system.

A 7-day hardening plan (small but real)

  • Day 1: Audit 10 most-used endpoints for object-level authorization
  • Day 2: Lock down prod config (debug off, CORS strict, cookie flags)
  • Day 3: Pin CI actions/dependencies; restrict CI permissions
  • Day 4: Review password reset + admin flows; add rate limits/MFA
  • Day 5: Add webhook/file integrity checks and validation
  • Day 6: Add structured logging for auth + high-value actions
  • Day 7: Improve exceptional handling: timeouts, bounded retries, generic errors

Next actions

If you want to go deeper, pair this post with a lightweight threat model and a few “security PR checklist” rules. The win is not perfect security — it’s fewer surprise incidents and smaller blast radius when something breaks.

Quiz

Quick self-check: answer from memory, then verify against the sections above.

1) Which is the most reliable fix for Broken Access Control?
2) What’s a strong mitigation for Software Supply Chain Failures?
3) What is the safest general approach to prevent Injection?
4) What’s the best practice for handling exceptional conditions in production?