Hardware, IoT & Embedded · Power

Power Optimization in IoT: Sleep Modes and Battery Math

Extend battery life from days to months with smart defaults.

Reading time: ~8–12 min
Level: All levels

Most IoT devices don’t die because the battery is “small” — they die because something stays on when it shouldn’t. This guide shows how to get battery life from days to months (and sometimes years) by combining two ideas: sleep modes (make “off” the default) and battery math (know what your firmware can afford). You’ll learn a simple power budget workflow, common pitfalls, and practical patterns you can apply on any MCU + radio stack.


Quickstart

If you only do a few things, do these. They’re the highest-impact defaults for power optimization in IoT: reduce the time you’re awake, reduce what’s on while awake, and verify the numbers with a meter.

1) Write a one-line power goal

You need a target to make tradeoffs. Pick a battery, an expected life, and a reporting cadence.

  • Battery: e.g., CR2032 (coin), 2×AA, LiPo, Li-SOCl2
  • Life: e.g., 12 months
  • Reporting: e.g., every 5 minutes, plus event triggers

2) Measure baseline sleep current (before coding)

“Deep sleep” doesn’t matter if your board leaks 500 µA. Find the floor first, then optimize firmware.

  • Disable LEDs / USB-UART / debug probes
  • Put the MCU in its lowest-power mode
  • Measure board current at the battery input

3) Duty-cycle everything (CPU, sensors, radio)

Most IoT devices should sleep >99% of the time.

  • Wake on timer or interrupt, do work fast, sleep again
  • Power-gate sensors between reads
  • Batch data and transmit less often

4) Audit the “always-on” list

These small currents silently destroy battery life.

  • LDO quiescent current (Iq)
  • Sensor idle currents
  • Pull-ups / floating GPIO / leakage through protection diodes
  • Radio standby (especially in “connected” modes)

5) Do the minimum viable battery math

You don’t need perfect electrochemistry. You need a realistic estimate that includes sleep current, radio bursts, regulator losses, and some derating.

  • Sleep current (µA): dominates long-life devices → fix hardware leakage and Iq first
  • Active bursts (mA for ms–s): dominate chatty devices → shorten work and batch radio
  • Peak current (mA): causes brownouts and retries → add capacitors, check battery IR
  • Regulator efficiency (%): hidden loss across every state → pick DC/DC where it helps, mind Iq

A practical “good enough” target

For “months to a year” battery life, aim for single-digit µA sleep current on the complete board, and keep radio transmissions short and infrequent. If you can’t get there, the fix is often hardware (Iq/leakage), not firmware.

Overview

Power optimization in IoT is mostly about time. You’re either awake and spending milliamps, or asleep and spending microamps. The easiest way to win is to spend almost all time in the cheapest state.

What this post covers

  • The mental model: average current and “energy per event”
  • Sleep modes: what they actually turn off (and what they don’t)
  • Battery math: realistic lifetime estimates + common deratings
  • A step-by-step workflow: measure → budget → optimize → verify
  • Pitfalls: leakage paths, Iq traps, peak currents, retries

Who this is for

Anyone building battery-powered sensors, BLE beacons, LoRaWAN nodes, Wi-Fi devices that “mostly sleep,” or any embedded device where “it works” isn’t the same as “it lasts.”

  • New to low-power: you’ll get a dependable checklist
  • Intermediate: you’ll learn where the numbers usually lie
  • Teams shipping devices: you’ll get a repeatable power budget workflow
The core idea

Don’t chase “low power” as a vibe. Chase a power budget: how many microamp-hours (µAh) you can spend per hour (or per event) and still hit your battery-life goal.

Core concepts

1) Duty cycle: the lever that matters most

Duty cycle is simply “how long you’re awake.” If you draw 30 mA while active but only for 100 ms each minute, your average current is small. If you draw 200 µA in “sleep” forever, your average current may be bigger than you think.

Average current (the whole game)

If your device has repeating states, approximate lifetime with: Iavg = Σ (Istate × timestate) / period and Life ≈ BatteryCapacity / Iavg.

  • Deep sleep: 5 µA for 59.8 s → ~4.98 µA average
  • Sense + compute: 8 mA for 150 ms → ~20 µA average
  • Radio TX: 40 mA for 50 ms → ~33 µA average

In this example, the short radio burst dominates average current more than the long sleep state—because it’s expensive. That’s why “sleep mode” alone isn’t a plan; you also need to optimize “wake work.”
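
The arithmetic behind the list above fits in a few lines. This is a minimal sketch (plain Python, no dependencies) using the example's numbers; swap in your own measured currents and durations.

```python
# Average-current contribution of each state: (current × time) / period.
# Values match the example above; replace them with your measurements.
states = [
    # (name, current in µA, time in seconds within one 60 s period)
    ("deep sleep",      5.0,      59.8),
    ("sense + compute", 8_000.0,  0.150),  # 8 mA
    ("radio TX",        40_000.0, 0.050),  # 40 mA
]
period_s = 60.0

for name, i_uA, t_s in states:
    print(f"{name:16s} -> {i_uA * t_s / period_s:6.2f} µA average")

i_avg_uA = sum(i * t for _, i, t in states) / period_s
print(f"{'total':16s} -> {i_avg_uA:6.2f} µA average")  # ~58.3 µA
```

Note how the 50 ms radio burst contributes more to the average than an entire minute of deep sleep.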

2) Sleep modes: “what stays on” is the fine print

Every MCU has multiple low-power modes. The names vary, but the pattern is consistent: the deeper the sleep, the more is turned off (CPU clock, peripherals, RAM blocks), and the fewer wake sources remain.

Common sleep mode levels

  • Idle / light sleep: CPU off, peripherals may keep running
  • Stop / standby: most clocks off, memory retention optional
  • Deep sleep: minimal always-on domain + RTC + wake logic
  • Shipping mode: near-zero (often needs a user action to wake)

Wake sources you’ll actually use

  • RTC timer: periodic reporting, time-based sampling
  • GPIO interrupt: button, reed switch, motion detect
  • Comparator / ADC threshold: low battery / analog event
  • Radio interrupt: rarely (connected modes can be power-expensive)
Deep sleep can still burn power

Your MCU might be at 2 µA in deep sleep, but your board might be at 200 µA because of: LDO Iq, sensor idle current, a pull-up network, or leakage through a pin. Always measure at the battery input.

3) Energy per event: budget in packets, not hours

For many IoT devices the power draw is spiky: wake, sample, compute, transmit, sleep. It’s often easier to budget µAh per event (a sensor read, a packet sent, a BLE connection) than to think in continuous currents.

Quick conversions you’ll use constantly

  • mA × seconds → µAh: µAh = (mA × s) / 3.6
  • µA average → mAh/day: mAh/day = (µA × 24) / 1000
  • Battery life (days): days ≈ (mAh / µA) × (1000 / 24)

Example: a 40 mA TX for 50 ms costs ~0.56 µAh. Do it 10,000 times and it’s ~5.6 mAh. Suddenly “tiny bursts” aren’t tiny anymore.
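
As a sketch, the three conversions translate directly into helper functions (the names are mine, not from any library):

```python
# Unit-conversion helpers for battery math (illustrative names).
def mA_s_to_uAh(mA: float, s: float) -> float:
    """Charge of a burst: µAh = (mA × s) / 3.6."""
    return (mA * s) / 3.6

def uA_avg_to_mAh_per_day(uA: float) -> float:
    """mAh/day = (µA × 24) / 1000."""
    return (uA * 24.0) / 1000.0

def life_days(capacity_mAh: float, i_avg_uA: float) -> float:
    """days ≈ (mAh / µA) × (1000 / 24)."""
    return (capacity_mAh / i_avg_uA) * (1000.0 / 24.0)

# The worked example: a 40 mA TX for 50 ms, repeated 10,000 times.
per_tx_uAh = mA_s_to_uAh(40.0, 0.050)
print(f"one TX:    {per_tx_uAh:.2f} µAh")                  # ~0.56 µAh
print(f"10,000 TX: {per_tx_uAh * 10_000 / 1000:.1f} mAh")  # ~5.6 mAh
```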

4) Battery reality: capacity is not a single number

Battery datasheets quote capacity under specific conditions (load, temperature, cutoff voltage). In real products: cold reduces capacity, high pulses reduce usable capacity, and some chemistries have higher self-discharge than others.

Battery and power-path choices (high-level)

  • Coin cell (CR2032): small, cheap, great for low average current. Gotcha: limited peak current; voltage droop can cause retries
  • AA/AAA (alkaline): easy sourcing, decent capacity. Gotcha: voltage varies widely; cold hurts
  • LiPo: high peak current, rechargeable. Gotcha: needs charging and protection; self-discharge matters
  • Li-SOCl2: very high energy density, long shelf life. Gotcha: pulse current needs buffering (capacitors)
  • LDO: simple, low noise. Gotcha: Iq can dominate sleep; burns power when Vin ≫ Vout
  • DC/DC buck: higher efficiency under load. Gotcha: some have higher Iq; light-load efficiency varies

5) Peak current and retries: the hidden battery killer

Wireless stacks draw large peaks (TX, RX, join/handshake). If the battery and power path can’t supply those peaks, the voltage dips, the radio retries, and you burn more energy than you would have if you had designed for the peak. This is why “it connects on USB” but “fails on battery” is such a classic bug.

Power optimization is reliability optimization

A stable power path (good regulator choice + adequate decoupling + sane peak handling) reduces brownouts and retries. Fewer retries means fewer milliseconds of radio time — and that directly translates to longer battery life.
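
One way to size "adequate decoupling" is the rule of thumb C = I·Δt/ΔV: the bulk capacitance needed to hold the rail within an acceptable droop if the capacitor alone had to carry the burst. This sketch is deliberately pessimistic (in practice the battery supplies most of the current and the capacitor covers the difference, so real values land lower) and the numbers are illustrative:

```python
# Pessimistic bulk-cap sizing: assume the capacitor alone carries the TX burst.
# C = I × Δt / ΔV  (illustrative numbers; check battery IR and cap ESR too).
i_burst_A = 0.040   # 40 mA TX peak
t_burst_s = 0.050   # 50 ms burst
dv_max_V  = 0.3     # acceptable droop before brownout/retry territory

c_F = i_burst_A * t_burst_s / dv_max_V
print(f"Worst-case bulk capacitance: {c_F * 1e6:.0f} µF")  # ~6667 µF
```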

Step-by-step

This is a workflow you can run on almost any device: BLE, Wi-Fi, LoRaWAN, Zigbee, sub-GHz, proprietary radios. The goal is repeatability: you should be able to change a feature and predict how it affects battery life.

Step 1 — Turn “battery life” into a power budget

Start from the battery and work backward. Pick a conservative usable capacity (not the marketing number), then compute the average current you can afford.

Budget recipe

  • Pick a battery capacity (mAh) and apply a safety factor (e.g., 0.7–0.85)
  • Decide lifetime (days)
  • Compute allowable average current: Iavg ≈ (usable mAh × 1000) / (days × 24)
  • Allocate the budget: sleep, sensing, radio, “always-on” overhead

Example: usable 1800 mAh from 2×AA, target 365 days → Iavg ≈ (1800×1000)/(365×24) ≈ 205 µA. That’s the entire device average. Sleep current is often a meaningful fraction.
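
The recipe above, as a few lines you can re-run whenever the goal changes (numbers from the 2×AA example; the nominal capacity and derating split is illustrative):

```python
# Budget recipe: battery -> allowable average current.
capacity_mAh  = 2500.0   # nominal 2xAA (illustrative)
usable_factor = 0.72     # derating -> ~1800 mAh usable
target_days   = 365.0

usable_mAh = capacity_mAh * usable_factor
i_avg_allowed_uA = (usable_mAh * 1000.0) / (target_days * 24.0)
print(f"Usable capacity:   {usable_mAh:.0f} mAh")
print(f"Allowable average: {i_avg_allowed_uA:.0f} µA")  # ~205 µA
```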

Step 2 — Measure your current profile (don’t guess)

Measure at least three numbers early: sleep current, active current, and radio peak. If you can, capture a current waveform (even a rough one) — it reveals where time is being spent.

Minimum measurement checklist

  • Measure at the battery input (not at a convenient test point)
  • Measure sleep with the radio off and sensors power-gated
  • Measure peak during TX/RX/join/handshake
  • Measure “typical wake cycle” duration (ms)

Common measurement traps

  • USB power hides regulator losses and peak droops
  • Multimeters can miss fast peaks (average looks “fine”)
  • Dev boards have extra parts that leak power (LEDs, converters)
  • Debug probes keep the MCU out of deep sleep

Step 3 — Make deep sleep the default state

The simplest power architecture in firmware is a state machine: wake → do the smallest possible work → schedule the next wake → sleep. Everything else is an exception.

Firmware pattern: wake, work, sleep (with explicit shutdown)

This is intentionally “generic C-ish pseudocode” you can adapt to your MCU/SDK (ESP-IDF, Zephyr, STM32 HAL, bare metal). The important part is not the API names — it’s the order and the fact that shutdown is explicit.

// Power-optimized IoT loop (generic pseudocode)
#include <stdint.h>
#include <stdbool.h>

static void shutdown_peripherals(void) {
  // 1) Stop periodic timers / SysTick that keep clocks running
  stop_all_timers();

  // 2) Put radios into OFF (not "standby") and disable RF front-end if any
  radio_off();

  // 3) Power-gate sensors and pull pins to a known low-leakage state
  gpio_set(SENSOR_EN, 0);
  gpio_config_low_leakage_all_pins();

  // 4) Disable unused peripherals (I2C/SPI/UART/ADC) and their clocks
  disable_peripheral_clocks();

  // 5) Optional: reduce retention (if your platform allows selecting RAM blocks)
  configure_retention_minimum();
}

static void do_wake_work(void) {
  // Keep the awake window short and predictable.
  gpio_set(SENSOR_EN, 1);
  delay_ms(3); // sensor settle (keep minimal)

  sample_t s = read_sensor_once();
  gpio_set(SENSOR_EN, 0);

  // Only transmit when needed (thresholding / batching)
  bool should_send = should_report(s);
  if (should_send) {
    radio_on();
    // Keep payload small; avoid expensive handshakes if your protocol permits
    send_packet_compact(s);
    radio_off();
  }
}

int main(void) {
  boot_init_minimal_clocks();

  while (1) {
    // Decide why we woke up (timer vs external interrupt)
    wake_reason_t r = get_wake_reason();

    // Do the smallest useful work
    do_wake_work();

    // Prepare the next wake (timer) and then go deep
    uint32_t next_wake_s = compute_next_wake_seconds(r);
    shutdown_peripherals();
    schedule_rtc_wakeup_seconds(next_wake_s);

    enter_deep_sleep(); // execution resumes here after wake
  }
}
GPIOs can leak more than you think

Floating pins, wrong pull directions, or powering a sensor through an IO pin can create continuous leakage. “Low power mode” won’t fix that. Make pin states explicit before sleep.

Step 4 — Optimize the radio (it’s often the biggest spike)

Radios are expensive because they involve high-frequency clocks, analog front-ends, and protocol overhead. The “power optimization” strategy is mostly “spend fewer milliseconds with RF on.”

High-leverage radio tactics

  • Batch: send once per N samples instead of every sample
  • Compress: smaller payloads mean shorter airtime
  • Backoff: avoid rapid retries; log and try later
  • Adaptive reporting: send less often when stable
  • Prefer uplink-only where possible: downlinks keep RX windows open
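
Batching, thresholding, and adaptive reporting can share one small policy object. A sketch of the idea in plain Python (the class and method names are illustrative, not from any radio stack):

```python
# "Send only on meaningful change, otherwise batch" policy sketch.
class Reporter:
    def __init__(self, threshold: float, max_batch: int):
        self.threshold = threshold  # minimum change worth a transmission
        self.max_batch = max_batch  # flush even if nothing changed
        self.last_sent = None
        self.batch = []

    def add_sample(self, value: float) -> list:
        """Returns samples to transmit now, or [] to stay quiet."""
        self.batch.append(value)
        changed = (self.last_sent is None
                   or abs(value - self.last_sent) >= self.threshold)
        if changed or len(self.batch) >= self.max_batch:
            out, self.batch = self.batch, []
            self.last_sent = value
            return out
        return []

r = Reporter(threshold=0.5, max_batch=10)
for v in [20.0, 20.1, 20.2, 20.8, 20.9]:
    tx = r.add_sample(v)
    if tx:
        print(f"TX {len(tx)} samples (latest {v})")
```

Only two of the five samples trigger a transmission; the rest ride along in a batch, which is exactly the airtime saving you want.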

Power pitfalls that look like “RF bugs”

  • Join/handshake loops on low battery (retries burn energy)
  • Voltage droop during TX causing resets
  • Staying in “connected” modes that require frequent listening
  • Too aggressive TX power for your range (wasted mA)

Step 5 — Fix the hardware floor: Iq, leakage, and the power path

Firmware can’t overcome a bad hardware idle. If your sleep current is high, you must audit the board: regulators, sensors, pull-ups, level shifters, ESD networks, and anything connected to the battery.

Hardware audit checklist (most common wins)

  • Regulator Iq: compare it to your sleep target (Iq should be “small” relative to target)
  • Sensor idle: some sensors draw mA unless explicitly put in sleep/power gated
  • LEDs: even “tiny” indicator LEDs are catastrophic for long-life devices
  • Pull-ups: large networks can burn tens of µA continuously
  • Leakage paths: through protection diodes, level shifters, or miswired IO power domains
  • Decoupling: add bulk capacitance for peak radio bursts
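
The audit is easier with a running tally: list every always-on contributor and sort by impact. The µA values below are placeholders; fill in numbers from datasheets and your own measurements.

```python
# Always-on current audit (illustrative µA values; use your measurements).
always_on_uA = {
    "regulator Iq":     1.5,
    "sensor idle":      2.0,
    "pull-up network":  30.0,  # e.g. 3.0 V across a 100 kΩ pull-up held low
    "RTC + wake logic": 0.8,
}
budget_uA = 10.0

total = sum(always_on_uA.values())
print(f"Board floor: {total:.1f} µA  (sleep budget: {budget_uA:.0f} µA)")
for name, i in sorted(always_on_uA.items(), key=lambda kv: -kv[1]):
    print(f"  {name:16s} {i:5.1f} µA  ({100 * i / total:.0f}% of floor)")
```

Sorting by contribution makes the Pareto win obvious: here the pull-up network dwarfs everything else, so fix it first.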

Step 6 — Do battery math with a realistic derating

Your first estimate should be conservative: assume lower usable capacity, include regulator losses, and reserve margin. Then validate against measurements and update the model as you learn.

A small power budget calculator you can paste and run

This script estimates average current and battery life for a periodic “wake cycle” device. It includes a simple derating factor and an efficiency factor for your power path.

#!/usr/bin/env python3
"""
battery_math.py — quick battery-life estimate for duty-cycled IoT devices.

Usage:
  - Edit the numbers in main() to match your device measurements.
  - The model is intentionally simple: it gets you in the right ballpark fast.
"""
from dataclasses import dataclass

@dataclass
class Profile:
  # sleep current in µA; active/radio currents in mA; durations in seconds
  sleep_uA: float
  period_s: float
  active_mA: float
  active_s: float
  radio_mA: float
  radio_s: float

@dataclass
class Battery:
  capacity_mAh: float        # datasheet/nominal
  usable_factor: float       # derating (temperature, cutoff, aging): 0.7–0.9 typical
  path_efficiency: float     # regulator efficiency: 0.75–0.95 typical

def avg_current_uA(p: Profile) -> float:
  # Convert each state's charge to microamp-hours then to average microamps.
  # µAh = (mA * s) / 3.6, and for sleep: µAh = (µA * s) / 3600
  sleep_s = max(p.period_s - p.active_s - p.radio_s, 0.0)

  q_sleep_uAh = (p.sleep_uA * sleep_s) / 3600.0
  q_active_uAh = (p.active_mA * p.active_s) / 3.6
  q_radio_uAh  = (p.radio_mA  * p.radio_s)  / 3.6

  q_total_uAh = q_sleep_uAh + q_active_uAh + q_radio_uAh
  # Average current in µA over the full period:
  return (q_total_uAh * 3600.0) / p.period_s

def estimate_life(b: Battery, i_avg_uA: float) -> dict:
  usable_mAh = b.capacity_mAh * b.usable_factor
  # account for power-path losses (less efficient means you "spend" more from the battery)
  effective_mAh = usable_mAh * b.path_efficiency

  hours = (effective_mAh * 1000.0) / i_avg_uA
  days = hours / 24.0
  years = days / 365.0
  return {"hours": hours, "days": days, "years": years}

def main():
  # Example device: wakes every 60s, does 150ms of compute and 50ms of radio.
  p = Profile(
    sleep_uA=7.0,
    period_s=60.0,
    active_mA=8.0,
    active_s=0.150,
    radio_mA=40.0,
    radio_s=0.050,
  )

  # Example battery/path: 2xAA usable ~1800mAh, 80% derating, 90% path efficiency.
  b = Battery(capacity_mAh=1800.0, usable_factor=0.80, path_efficiency=0.90)

  i_avg = avg_current_uA(p)
  life = estimate_life(b, i_avg)

  print(f"Average current: {i_avg:.1f} µA")
  print(f"Estimated life:  {life['days']:.0f} days (~{life['years']:.2f} years)")

  # Helpful extras:
  print(f"mAh/day:         {(i_avg * 24.0) / 1000.0:.2f} mAh")
  print("Reminder: derate further for cold, high pulse currents, and aggressive cutoff voltages.")

if __name__ == '__main__':
  main()

How to use it: replace the currents with measured values (sleep at the battery input, radio peak/average during TX/RX), and set a conservative usable factor. If your prediction is wildly optimistic, the usual culprit is hidden “always-on” current (Iq, leakage, sensor idle, or connected radio time).

Step 7 — Verify on battery, in the real environment

After you optimize, validate under the conditions that matter: battery supply (not bench PSU), temperature range, and realistic RF conditions. Watch for retries and brownouts — they can dominate your energy use.

Verification checklist

  • Run on the real battery chemistry and cutoff voltage
  • Test worst-case temperature (especially cold starts)
  • Measure wake-cycle duration and confirm it matches your budget
  • Simulate poor RF and verify retry/backoff behavior
  • Confirm sleep current after hours/days (some bugs “wake” later)

If results don’t match the math

  • Look for hidden states (connected radio, periodic scans)
  • Check for interrupts preventing deep sleep
  • Search for periodic timers that keep clocks alive
  • Audit peripherals left in “idle” not “off”
  • Re-measure Iq and leakage with sensors physically disconnected

Step 8 — Keep a power budget artifact (so changes don’t regress)

Treat power like a requirement. Keep a small “budget file” in your repo: the measured currents, the duty cycle, and your target. It makes reviews easier (“this feature adds 20 µA average”) and stops accidental regressions.

Example: a tiny power budget you can track in Git

This JSON is a human-readable summary of your power profile. Store it alongside firmware releases. It also helps when you revisit the project months later.

{
  "target": {
    "battery": "2xAA",
    "goal_days": 365,
    "notes": "Derate usable capacity to 80% to account for temperature and cutoff voltage."
  },
  "measured": {
    "sleep_uA_at_battery": 7.0,
    "wake_period_s": 60,
    "compute": { "current_mA": 8.0, "duration_s": 0.150 },
    "radio_tx": { "current_mA": 40.0, "duration_s": 0.050 }
  },
  "power_path": {
    "regulator_type": "buck",
    "estimated_efficiency": 0.90
  },
  "sanity_checks": [
    "No debug probe attached during sleep measurements",
    "LEDs disabled/removed",
    "Sensor EN pin is low during sleep",
    "Radio is OFF (not standby) between sends"
  ]
}
What “done” looks like

You’re not done when you hit a low sleep current once. You’re done when you can: (1) reproduce the measurement, (2) explain each state’s contribution, and (3) keep it stable across firmware changes.

Common mistakes

These are the patterns behind “our device dies in two weeks” and “it works on USB but not on battery.” Most fixes are boring — and that’s good news.

Mistake 1 — Optimizing firmware before measuring hardware floor

If your board leaks 200 µA in deep sleep, changing code won’t get you to “years.”

  • Fix: measure sleep current at battery input with all peripherals truly off.
  • Fix: audit regulator Iq, sensors, pull-ups, and leakage paths.

Mistake 2 — Treating “standby” as “off”

Many parts have multiple low-power states; the wrong one can be 10–100× higher.

  • Fix: explicitly command OFF/sleep states for sensors and radios.
  • Fix: verify with measurements (don’t trust assumptions).

Mistake 3 — Leaving GPIOs floating or biased the wrong way

Floating pins can toggle internal circuits and cause leakage through external components.

  • Fix: set all unused pins to a defined low-leakage state.
  • Fix: ensure sensor EN lines don’t back-power through IO.

Mistake 4 — Ignoring peak current and voltage droop

A battery can have “capacity” but still fail on peaks, causing retries and resets.

  • Fix: add bulk capacitance near the radio and power rails.
  • Fix: test on real battery at worst temperature; watch for brownouts.

Mistake 5 — Using a dev board as a power benchmark

Dev boards often include USB chips, LEDs, and converters that dominate sleep current.

  • Fix: measure on your production-like board (or strip extras).
  • Fix: disconnect the debugger and confirm deep sleep really happens.

Mistake 6 — Too chatty: sending tiny packets too often

Radio overhead can dominate even with small payloads.

  • Fix: batch samples and send less often.
  • Fix: add thresholds/hysteresis so “no change” doesn’t transmit.

Mistake 7 — Forgetting “always-on” overhead

Regulator Iq, pull-ups, and sensor idle currents silently tax your average current.

  • Fix: list every component connected to the battery and its sleep current contribution.
  • Fix: remove/replace the worst offenders first (Pareto wins).

Mistake 8 — Not budgeting for retries and bad RF days

Real deployments have interference, distance, and fading. Your power must survive that.

  • Fix: implement backoff and caps on retries per window.
  • Fix: log retry rates and treat them as a power metric.
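
A capped-retry backoff policy is only a few lines. This sketch is generic Python (the function name, defaults, and windowing behavior are illustrative, not from any particular stack):

```python
import random
from typing import Optional

def next_retry_delay_s(attempt: int,
                       base_s: float = 30.0,
                       cap_s: float = 3600.0,
                       max_attempts: int = 5) -> Optional[float]:
    """Delay before the next retry, or None: give up until the next window."""
    if attempt >= max_attempts:
        return None  # stop burning energy; store the sample and try later
    delay = min(base_s * (2 ** attempt), cap_s)
    return delay * random.uniform(0.5, 1.0)  # jitter de-synchronizes a fleet

for attempt in range(7):
    d = next_retry_delay_s(attempt)
    print(f"attempt {attempt}: "
          + ("give up this window" if d is None else f"wait ~{d:.0f} s"))
```

The cap and the give-up path are the power-relevant parts: they put a hard ceiling on how much energy one bad RF day can cost.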
The “mystery drain” checklist

If your math says “1 year” but you get “2 months,” suspect: regulator Iq, sensor idle, a periodic timer/interrupt preventing deep sleep, connected radio behavior (scanning/listening), or an LED. Start by re-measuring sleep current after a full hour of runtime.

FAQ

How low should sleep current be for a battery-powered IoT device?

A good rule of thumb is: single-digit µA sleep current for multi-month to multi-year targets. If your budget allows hundreds of µA average (big batteries, short life), you can tolerate more — but high sleep current often signals a design bug (Iq/leakage) that will bite you later.

Is an LDO or a buck converter better for power optimization in IoT?

It depends on your load profile. Bucks can be more efficient during active bursts, but some have higher quiescent current (Iq). For long-sleep devices, an LDO with ultra-low Iq can outperform a buck overall. The right answer comes from your measured profile: compare total average current including regulator losses, not just peak efficiency.
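
A back-of-envelope comparison makes the tradeoff concrete. In this simplified model an LDO passes the load current straight through (plus its Iq), while a buck scales it by the voltage ratio and efficiency (plus its own Iq). All numbers are illustrative:

```python
# Battery-side average current: LDO vs buck (simplified model).
def ldo_batt_uA(load_uA: float, iq_uA: float) -> float:
    return load_uA + iq_uA  # linear regulator: load current passes through

def buck_batt_uA(load_uA: float, iq_uA: float,
                 vin: float, vout: float, eff: float) -> float:
    return load_uA * vout / (vin * eff) + iq_uA

vin, vout = 3.0, 1.8
for load_uA in (5.0, 60.0):  # sleepy device vs. busier average
    ldo  = ldo_batt_uA(load_uA, iq_uA=1.0)
    buck = buck_batt_uA(load_uA, iq_uA=15.0, vin=vin, vout=vout, eff=0.90)
    print(f"load {load_uA:5.1f} µA -> LDO {ldo:5.1f} µA, buck {buck:5.1f} µA")
```

At a 5 µA average the low-Iq LDO wins; at 60 µA the buck's efficiency overtakes its Iq penalty. Where the crossover sits depends entirely on your measured profile.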

Why does my device last forever on a bench supply but dies quickly on battery?

Bench supplies hide voltage droop and peak limitations. Batteries have internal resistance, and radios draw peaks. On battery, the voltage can dip during TX, causing brownouts or retries — which wastes energy and shortens life. Test on the real battery and measure peak behavior.

What’s the fastest way to improve battery life without changing hardware?

Reduce radio time first: batch transmissions, send only on meaningful change (thresholds/hysteresis), and cap retries with sensible backoff. Then shorten the awake window: stop unnecessary peripherals, avoid long debug logs, and make deep sleep the default state between events.

How accurate is “battery math” and what should I derate?

Simple battery math is a great first estimate, but you should derate capacity for temperature, cutoff voltage, pulse current behavior, and aging. Many teams start with a usable factor like 0.8 and refine once they have field data. The biggest source of error is usually hidden always-on current, not the math.

What’s the best way to handle “I’m not sure” states to save power?

Add a safe fallback: if the device can’t connect, don’t hammer the radio. Back off exponentially, store samples locally, and try again later. Your device should fail gracefully without turning RF retries into a battery-drain loop.

Cheatsheet

A scan-fast checklist for power optimization in IoT: sleep modes and battery math.

Power workflow (repeatable)

  • Pick battery + lifetime goal → compute allowable Iavg
  • Measure sleep current at battery input (no debugger)
  • Measure radio peak and typical wake-cycle duration
  • Build a power profile: states + currents + durations
  • Optimize biggest contributors first (Pareto)
  • Verify on real battery + worst-case conditions
  • Track a “power budget” artifact in your repo

Highest-impact levers

  • Sleep more: >99% sleep for long-life devices
  • Shorten awake time: do work fast, then sleep
  • Transmit less: batch, compress, threshold
  • Fix Iq/leakage: hardware floor often dominates
  • Handle peaks: avoid brownouts and retries
  • Make pin states explicit: avoid floating/leakage

Quick “sanity targets”

  • Sleep current: < 10 µA (board-level) → sleep dominates long-life devices
  • Wake window: tens to hundreds of ms → shorter awake time cuts mAh quickly
  • TX frequency: as low as the product allows → radio overhead is expensive
  • Retries: capped, with backoff → prevents battery-drain loops

One-liner to remember

Battery life ≈ capacity / average current. Make average current measurable and intentional.

Wrap-up

Power optimization in IoT isn’t magic — it’s discipline. Put the device in deep sleep by default, turn off everything you don’t need, and keep a simple power budget so every feature has a known cost. The fastest path to “months instead of days” is usually: fix sleep current, reduce radio time, and verify on the real battery.

Next actions (pick one)

  • Measure your board-level sleep current today and write it down
  • Time your wake cycle and list the top 3 things it does while awake
  • Compute your power budget and identify the biggest contributor
  • Implement batching/thresholding so you transmit less often
  • Create a “power budget” JSON file in your repo and keep it updated

If you’re building a full IoT system, the related posts below cover the plumbing around power: radios, OTA, and system architecture.

Quiz

Quick self-check: answer from memory, then compare against the sections above.

1) What usually gives the biggest battery-life improvement in IoT power optimization?
2) Which measurement should you take first when you want “months to years” of battery life?
3) What’s a common reason “deep sleep” still drains the battery faster than expected?
4) In battery math for IoT, why should you include peak currents and retries in your thinking?