“Security” doesn’t have to mean a giant rewrite. For most mobile apps, a few defaults deliver outsized protection: secure storage (so secrets don’t leak from disk), TLS done right (so the network can’t be spoofed), and jailbreak/root signals (so you can reduce risk when the device environment is compromised). This guide focuses on the high-impact basics you can adopt in a day—then improve over time.
Quickstart
If you’re short on time, implement these in order. Each step reduces a common, real-world attack path with minimal complexity. The goal is not “unhackable” — it’s “hard enough that simple attacks fail, and risky conditions are handled safely.”
1) Do a 15-minute “secrets audit”
- Search for tokens, API keys, passwords in UserDefaults, SharedPreferences, JSON files, logs
- List what you store: access token, refresh token, session cookie, PII, cached responses
- Decide: what can be cached, what must be protected, what should not be stored at all
2) Move secrets to the platform vault
- iOS: Keychain + restrictive accessibility (“This device only”)
- Android: Keystore-backed encryption (e.g., Jetpack Security)
- Store the minimum needed (prefer short-lived access tokens)
3) Enforce HTTPS and stop “trust all”
- Block cleartext traffic (Android config) and avoid ATS exceptions (iOS)
- Remove any custom TrustManager/SSL code that “accepts everything”
- Fail closed: if cert validation fails, do not proceed
4) Add jailbreak/root detection as a risk signal
- Implement simple checks (files, write access, debuggability)
- Treat results as signals, not proof (bypasses exist)
- Pick a response: re-auth, disable sensitive flows, or block high-risk actions
Add a “no secrets in logs” rule. Most accidental leaks happen because an error handler prints headers, tokens, or full request bodies during debugging—and it stays in production longer than anyone expects.
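One way to make that rule stick is to pass every log line through a redaction helper before it reaches your logger. A minimal Kotlin sketch; the patterns and the `redactSecrets` name are illustrative, not from any specific library:

```kotlin
// Redacts common secret-bearing values before they reach a logger.
// Patterns are illustrative starting points; extend them for your own
// header names and token formats.
private val redactions = listOf(
    Regex("(?i)(authorization:\\s*Bearer\\s+)\\S+") to "\$1<redacted>",
    Regex("(?i)(\"(?:access_token|refresh_token|password)\"\\s*:\\s*\")[^\"]*") to "\$1<redacted>"
)

fun redactSecrets(line: String): String =
    redactions.fold(line) { acc, (pattern, replacement) -> acc.replace(pattern, replacement) }

fun main() {
    println(redactSecrets("Authorization: Bearer eyJhbGciOi..."))
    // Authorization: Bearer <redacted>
    println(redactSecrets("""{"access_token":"abc123","user":"kim"}"""))
    // {"access_token":"<redacted>","user":"kim"}
}
```

Wiring this into a single logging chokepoint (rather than sprinkling redaction calls around) is what keeps the rule enforceable.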
Overview
Mobile app security basics are easier to reason about if you split the world into three surfaces: data at rest (on device), data in transit (network), and device integrity (is the OS environment trustworthy?). This post covers a practical baseline for each:
What you’ll implement (or verify)
| Area | Baseline goal | Typical failure |
|---|---|---|
| Secure storage | Secrets are stored in Keychain/Keystore (or encrypted with keys kept there) | Tokens in plaintext preferences or files |
| TLS | HTTPS only, correct cert validation, safe defaults, optional pinning where needed | “Trust all certificates” or debugging proxies left enabled |
| Jailbreak/root signals | Detect risky environments and reduce impact of compromise | Assuming integrity checks are unbypassable |
You’ll also see how to think about trade-offs: certificate pinning reduces certain attacks but increases operational risk if you don’t plan rotations; root detection can reduce fraud but also harms legitimate users with custom ROMs; storing less data is often stronger than encrypting more data.
Assume an attacker can: read your app’s local storage on a compromised device, observe networks on hostile Wi-Fi, and run tools that hook or debug your app. Your baseline should make those attacks expensive, not effortless.
Core concepts
1) Secrets vs data: not everything deserves vault storage
The fastest way to improve mobile app security is to classify what you store: secrets (tokens/keys), sensitive user data (PII), and cacheable data (things you can re-fetch). Secrets should live in platform-protected storage; sensitive data should be minimized and protected; cacheable data should be treated as disposable.
A practical storage decision rule
- If exposure enables account takeover → treat as a secret (vault or encrypted)
- If exposure harms user privacy → minimize + protect + expire
- If you can re-fetch it safely → store it as a cache with short TTL (or don’t store it)
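The decision rule above can be sketched as a tiny classifier. The type and field names are hypothetical, but encoding the rule makes it reviewable in code review rather than tribal knowledge:

```kotlin
// Illustrative classification of stored items, mirroring the decision rule above.
enum class StorageClass { SECRET, SENSITIVE, CACHE }

data class StoredItem(
    val name: String,
    val enablesAccountTakeover: Boolean,
    val harmsPrivacy: Boolean,
    val refetchable: Boolean
)

fun classify(item: StoredItem): StorageClass = when {
    item.enablesAccountTakeover -> StorageClass.SECRET    // vault or encrypted
    item.harmsPrivacy -> StorageClass.SENSITIVE           // minimize + protect + expire
    else -> StorageClass.CACHE                            // short TTL, or don't store
}

fun main() {
    println(classify(StoredItem("refresh_token", true, false, false)))    // SECRET
    println(classify(StoredItem("profile_email", false, true, true)))     // SENSITIVE
    println(classify(StoredItem("feed_page_1", false, false, true)))      // CACHE
}
```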
2) Encryption is a system, not a function call
“We encrypt it” isn’t enough. Secure storage depends on where keys live, when they unlock, whether backups include data, and what happens when the device is compromised. A strong baseline uses the platform vault to keep key material out of files.
iOS mental model
Keychain entries can be configured with accessibility constraints (e.g., available only when the device is unlocked) and “this device only” options that prevent restoration onto another device via backups.
Android mental model
The Keystore protects key material; you typically encrypt data using keys stored in Keystore, then store ciphertext in preferences/files. Jetpack Security wraps this pattern for many common cases.
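As a sketch of that pattern, Jetpack Security's `EncryptedSharedPreferences` keeps the master key in the Keystore and writes only ciphertext to the preferences file. The file name and preference key below are illustrative:

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Key material lives in the Android Keystore; only ciphertext reaches disk.
fun secureTokenStore(context: Context) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val prefs = EncryptedSharedPreferences.create(
        context,
        "secure_prefs",                                            // illustrative file name
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

    prefs.edit().putString("session_token", "example-token").apply()
}
```

Note this requires the `androidx.security:security-crypto` dependency and only runs on Android; treat it as one convenient wrapper over the Keystore pattern, not the only valid one.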
3) TLS: “works” is not the same as “secure”
A request over HTTPS can still be insecure if certificate validation is broken, if you allow cleartext fallbacks, or if you accept invalid certs “temporarily.” The baseline is: HTTPS only, default validation intact, and careful use of pinning.
Any code path that disables certificate validation turns TLS into encryption without identity. Attackers love this because it enables silent man-in-the-middle interception on hostile networks.
4) Jailbreak/root detection: signal, not certainty
If the device is rooted/jailbroken, many protections can be bypassed. Integrity checks are still useful—especially for fraud prevention—but only when you design the response as a risk-based policy: “What do we allow, what do we step up, and what do we block?”
Good responses to a high-risk device signal
- Require re-authentication (shorten session)
- Disable high-risk actions (payouts, changing security settings)
- Hide or remove local caching of sensitive data
- Increase telemetry / fraud checks (within privacy rules)
Step-by-step
This section is a practical implementation path. You can adopt it incrementally: start with storage, then lock down TLS, then add integrity signals and a policy for what to do when risk is high.
Step 1 — Decide what you store (and what you should stop storing)
Before you code, reduce the amount of sensitive material your app holds. This is the “security win” most teams skip. For example: keep access tokens short-lived, store refresh tokens only when necessary, and avoid caching full API responses that contain PII when a partial cache would do.
Mini checklist: storage inventory
- List every persistent storage location (preferences, files, DB, cache, Keychain/Keystore)
- Mark items as: secret / sensitive / cache
- Delete what you can (especially debug leftovers)
- Set TTL/expiration for caches that might include user data
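For the last checklist item, a TTL can be enforced with a small wrapper around cached values. This is an illustrative sketch, not a specific library API:

```kotlin
import java.time.Instant

// Illustrative cache-entry wrapper with a TTL, per the checklist above.
data class CachedValue<T>(val value: T, val storedAt: Instant, val ttlSeconds: Long) {
    fun isFresh(now: Instant = Instant.now()): Boolean =
        now.isBefore(storedAt.plusSeconds(ttlSeconds))
}

fun main() {
    // Stored 10 minutes ago with a 5-minute TTL: stale, should be discarded.
    val stale = CachedValue("profile-json", Instant.now().minusSeconds(600), ttlSeconds = 300)
    println(stale.isFresh())  // false
}
```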
Step 2 — Store secrets in Keychain (iOS) with restrictive defaults
On iOS, Keychain is the default place for credentials and tokens. The key choices are: accessibility (when the entry is available) and device binding (whether it can move via backup/restore). A common secure default for session tokens is “available when unlocked” and “this device only.”
```swift
import Foundation
import Security

enum KeychainError: Error { case unexpectedStatus(OSStatus) }

final class KeychainStore {
    private let service = "com.example.myapp"

    func saveToken(_ token: String, account: String = "session_token") throws {
        let data = Data(token.utf8)

        // Delete existing item first (idempotent save).
        let queryDelete: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
        SecItemDelete(queryDelete as CFDictionary)

        // Add a new item with restrictive accessibility:
        // - WhenUnlocked: only accessible while device is unlocked
        // - ThisDeviceOnly: not migrated via iTunes/iCloud backups to another device
        let queryAdd: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecValueData as String: data,
            kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly
        ]
        let status = SecItemAdd(queryAdd as CFDictionary, nil)
        guard status == errSecSuccess else { throw KeychainError.unexpectedStatus(status) }
    }

    func loadToken(account: String = "session_token") throws -> String? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var item: CFTypeRef?
        let status = SecItemCopyMatching(query as CFDictionary, &item)
        if status == errSecItemNotFound { return nil }
        guard status == errSecSuccess else { throw KeychainError.unexpectedStatus(status) }
        guard let data = item as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }

    func deleteToken(account: String = "session_token") {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
        SecItemDelete(query as CFDictionary)
    }
}
```
- Backups: use “ThisDeviceOnly” when restoring tokens onto another device would be dangerous.
- Accessibility: “AfterFirstUnlock” can be useful for background refresh, but it increases exposure surface.
- Sharing: avoid Keychain access groups unless you truly need them (more surface area).
Step 3 — Lock down TLS and consider pinning for high-risk APIs (Android example)
The baseline is: use the OS defaults, rely on proper certificate validation, and ensure you never accept invalid certs. Pinning is optional: it can reduce some MITM risk, but it must be managed (rotation, backup pins, and failure behavior). If you pin, prefer pinning a public key hash (SPKI) and ship at least one backup pin for rotation.
```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

// Pin the public key (SPKI) hash for your domain.
// Include at least one backup pin to allow certificate rotation.
val certificatePinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=") // backup
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()

fun fetchProfileJson(): String {
    val request = Request.Builder()
        .url("https://api.example.com/v1/profile")
        .get()
        .build()

    client.newCall(request).execute().use { response ->
        if (!response.isSuccessful) throw IllegalStateException("HTTP ${response.code}")
        return response.body?.string() ?: ""
    }
}
```
If you deploy pinning without a rotation plan, you can lock out every user when certificates change. Mitigation: ship backup pins, monitor failures, and plan how you’ll update the app before pins expire.
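To obtain the SPKI hash for a host, a common approach is to derive it from the live certificate with openssl. Verify the result out-of-band (for example, against the certificate your CA issued) so you don't accidentally pin an interceptor's key:

```shell
HOST="api.example.com"

# Extract the leaf certificate's public key, DER-encode it,
# hash with SHA-256, and base64-encode: that is the "sha256/..." pin value.
openssl s_client -connect "${HOST}:443" -servername "${HOST}" < /dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
```

Run the same pipeline against your backup certificate (or its CSR/key) to produce the backup pin before you ever need it.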
Step 4 — Verify your server TLS posture (quick sanity checks)
Mobile app TLS is only as strong as the server configuration. You don’t need a perfect TLS lab; you need to catch the obvious failures: weak protocol versions, wrong certificate chain, or unexpected redirects to HTTP. Run quick checks against your production endpoint during release reviews.
```shell
# Replace with your real host:
HOST="api.example.com"

# 1) Confirm the server completes a TLS 1.2 handshake (or newer).
openssl s_client -connect "${HOST}:443" -tls1_2 -servername "${HOST}" < /dev/null 2>/dev/null | grep -E "Protocol|Cipher|Verify return code"

# 2) Make sure there is no HTTP redirect or cleartext endpoint accidentally enabled.
curl -sS -o /dev/null -w "https_status=%{http_code} redirect_url=%{redirect_url}\n" "https://${HOST}/health"

# 3) If you use HSTS (web clients), ensure it’s present (not required for native apps, but good hygiene).
curl -sSI "https://${HOST}/" | grep -i "strict-transport-security" || echo "HSTS header not found"
```
Step 5 — Add jailbreak/root signals and decide policy
Integrity checks are not about “detecting every attacker.” They’re about avoiding the worst outcomes on compromised devices. Start with lightweight signals (debuggable flag, suspicious file paths, ability to write where you shouldn’t, hooking frameworks), and translate signals into a policy that matches your product risk.
Signals you can implement quickly
- Is the app debuggable in a release build?
- Do known jailbreak/root file paths exist?
- Can the app write to restricted locations?
- Is a debugger attached / suspicious dynamic libraries loaded?
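A couple of the file-based signals above can be sketched in a few lines of Kotlin. The paths are common root-detection heuristics, not an exhaustive or bypass-proof list, and each signal is weak alone:

```kotlin
import java.io.File

// Common (illustrative) root-artifact paths; attackers can hide these.
private val suspiciousPaths = listOf(
    "/system/xbin/su",
    "/system/bin/su",
    "/sbin/su",
    "/system/app/Superuser.apk"
)

fun hasSuspiciousFiles(): Boolean = suspiciousPaths.any { File(it).exists() }

// Checks whether an `su` binary is reachable on the PATH.
fun canFindSuBinary(): Boolean = try {
    Runtime.getRuntime().exec(arrayOf("which", "su"))
        .inputStream.bufferedReader().readText().isNotBlank()
} catch (e: Exception) {
    false
}

// Combine signals into a count/score rather than a single boolean verdict.
fun rootSignalCount(): Int = listOf(hasSuspiciousFiles(), canFindSuBinary()).count { it }
```

Feeding a count (or weighted score) into policy, instead of a single yes/no, makes it easier to tune responses without shipping a new build for every heuristic change.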
Translate signals into behavior
- Low risk: warn the user, log telemetry
- Medium risk: force re-auth, disable offline cache
- High risk: block high-value actions (payments, credential changes)
- Critical: refuse to run, but only if your use case truly requires it
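The tiers above can be encoded as an explicit policy table, which keeps the response reviewable and testable. The type and field names here are illustrative:

```kotlin
// Illustrative mapping from a device-risk level to product policy,
// mirroring the low/medium/high/critical tiers above.
enum class Risk { LOW, MEDIUM, HIGH, CRITICAL }

data class Policy(
    val warnUser: Boolean,
    val forceReauth: Boolean,
    val blockHighValueActions: Boolean,
    val refuseToRun: Boolean
)

fun policyFor(risk: Risk): Policy = when (risk) {
    Risk.LOW      -> Policy(warnUser = true, forceReauth = false, blockHighValueActions = false, refuseToRun = false)
    Risk.MEDIUM   -> Policy(warnUser = true, forceReauth = true,  blockHighValueActions = false, refuseToRun = false)
    Risk.HIGH     -> Policy(warnUser = true, forceReauth = true,  blockHighValueActions = true,  refuseToRun = false)
    Risk.CRITICAL -> Policy(warnUser = true, forceReauth = true,  blockHighValueActions = true,  refuseToRun = true)
}
```

Because the `when` is exhaustive over the enum, adding a new tier forces every call site to be reconsidered at compile time.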
Your integrity policy should be explainable: “We’re protecting your account and preventing fraud.” If the app blocks users with no explanation, they’ll churn or find workarounds.
Common mistakes
These are the patterns behind “we use HTTPS, so we’re secure” and “we encrypted it, so we’re done.” Most of them happen because a debug convenience accidentally becomes a production behavior.
Mistake 1 — Storing tokens in plaintext preferences
UserDefaults / SharedPreferences are not vaults. If an attacker can read storage, they get the session.
- Fix: store tokens in Keychain/Keystore (or encrypt with keys stored there).
- Fix: store less; keep access tokens short-lived and avoid “forever” sessions.
Mistake 2 — “Trust all certificates” for testing
This breaks the identity check of TLS. If it ships, a MITM can silently intercept traffic.
- Fix: remove the code path completely; don’t gate it behind a config flag.
- Fix: use proper dev certificates, local DNS, or a test environment with real TLS.
Mistake 3 — Allowing cleartext fallback
One insecure endpoint can become the weakest link, especially if it carries tokens or session cookies.
- Fix: block cleartext traffic on Android and avoid ATS exceptions on iOS.
- Fix: add tests that fail builds if HTTP URLs appear in code or config.
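On Android, cleartext blocking can be declared in a network security config referenced from the manifest (`android:networkSecurityConfig="@xml/network_security_config"`). A minimal `res/xml/network_security_config.xml`:

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Disallow http:// for all domains; HTTPS only. -->
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

On Android 9 (API 28) and newer, cleartext is already blocked by default; declaring it explicitly documents intent and covers older OS versions.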
Mistake 4 — Pinning without a rotation plan
Pinning can improve security, but operational failure can become an outage.
- Fix: ship backup pins; monitor pin failures.
- Fix: prefer public key pinning, and plan certificate renewal timelines.
Mistake 5 — Logging secrets “temporarily”
Logs survive longer than you think (crash reports, analytics, support screenshots).
- Fix: redact headers and bodies; never print Authorization tokens.
- Fix: add a lint/check in CI for common secret patterns.
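A CI check for such patterns can be as simple as a grep script that fails the build on a match. The patterns below are illustrative starting points, not a complete secret scanner:

```shell
#!/bin/sh
# ci-secret-scan.sh: fail the build if likely secrets or unsafe TLS
# patterns appear in the source tree. Patterns are illustrative; tune
# them for your codebase (or use a dedicated secret-scanning tool).
PATTERN='Authorization: Bearer [A-Za-z0-9._-]+|TrustAllCerts|ALLOW_ALL_HOSTNAME_VERIFIER'
if grep -rnE "$PATTERN" --include='*.kt' --include='*.swift' "${1:-.}"; then
  echo 'Potential secret or unsafe TLS pattern found' >&2
  exit 1
fi
echo 'Secret scan passed'
```

Running this in CI (and on pre-commit hooks) catches the “temporary” debug leftovers before they ship.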
Mistake 6 — Treating jailbreak/root detection as a guarantee
Attackers can bypass checks. Use signals to reduce risk, not to declare victory.
- Fix: implement multiple independent checks and score risk.
- Fix: pair with server-side protections (rate limits, anomaly detection, step-up auth).
Before each release, ask: “What changed that could affect storage, network validation, or integrity?” This simple review catches most accidental regressions (debug flags, new endpoints, new caches).
FAQ
Should I use certificate pinning for every mobile app?
Not always. Pinning can reduce some man-in-the-middle risk, but it introduces operational risk (certificate rotation and outages). Use pinning for high-risk apps (financial, healthcare, admin tools) or high-value endpoints, and ship backup pins with a rotation plan. For many apps, correct default TLS validation + HTTPS-only + no cleartext fallback is already a strong baseline.
Where should I store access tokens and refresh tokens?
Store tokens in platform-protected storage: Keychain (iOS) and Keystore-backed encryption (Android). Prefer short-lived access tokens; store refresh tokens only if you need persistent sessions. If your app can function without a refresh token (e.g., re-auth on launch), that’s often safer.
Is “encrypting SharedPreferences/UserDefaults” enough?
It depends on where the encryption key lives. If the key is hardcoded or stored next to the ciphertext, it’s not a real boundary. The robust approach is: keep key material protected by Keychain/Keystore, then store ciphertext elsewhere. (Many platform libraries implement this pattern for you.)
What should my app do when it detects jailbreak/root?
Treat it as a risk signal. Common responses include: forcing re-authentication, disabling high-risk actions, reducing offline caching, or adding step-up verification. Blocking the entire app is usually a last resort unless your domain demands it.
Can I stop reverse engineering and hooking completely?
You can raise the cost, but you can’t guarantee prevention. Focus on reducing the impact: keep secrets out of code, validate critical actions on the server, use integrity signals, and ensure your app fails safely when the environment is compromised.
What’s the simplest way to avoid “oops, we shipped debug security”?
Add a release checklist: ban “trust all” networking code, block cleartext traffic, ensure logging is redacted, and run an end-to-end network test against production endpoints before signing builds. Small, strict checks beat big, rarely-used policies.
Cheatsheet
A scan-fast checklist for mobile app security basics: storage, TLS, and jailbreak signals.
Storage
- Store secrets in Keychain (iOS) / Keystore-backed encryption (Android)
- Use restrictive access: “when unlocked” + “this device only” where appropriate
- Store less: short-lived access tokens, minimal PII, caches with TTL
- Never log tokens, auth headers, or full request bodies containing secrets
- Clear sensitive caches on logout and when risk is high
TLS / Network
- HTTPS only; block cleartext traffic
- Do not ship custom trust managers that accept invalid certs
- Fail closed on TLS errors
- Consider pinning only with rotation plan + backup pins
- Verify server TLS posture during release (handshake + redirects)
Jailbreak/root signals
- Implement multiple independent checks (debuggable, suspicious files, write access, debugger attached)
- Use a risk policy (warn / step-up / restrict / block)
- Prefer server-side validation for high-value actions
- Expect bypasses; treat signals as one input, not the decision
Move secrets into Keychain/Keystore and remove any “trust all” TLS code paths. Those two fixes prevent the most common, easiest-to-exploit leaks.
Wrap-up
Mobile app security basics are mostly about strong defaults and removing footguns: store secrets safely, keep TLS validation intact, and treat jailbreak/root as a risk signal. You don’t need a perfect system to get real benefits—you need a consistent baseline and a habit of preventing regressions.
Your next actions (pick one)
- Run a “secrets audit” and migrate the top 3 items into Keychain/Keystore
- Search for any TLS overrides and delete “trust all” code paths
- Define a simple integrity policy (what happens if risk is high?) and implement the first signal
- Add a release checklist to keep these defaults from drifting
If you want to go deeper, check the related posts below for practical guidance on deep links, app architecture, and performance profiling—all of which can indirectly affect security through correctness and stability.