Code reviews aren’t just about catching bugs. They’re how teams teach standards, prevent “unknown unknowns,” and build trust. The catch: even correct feedback can land badly if it sounds personal, vague, or absolute. This guide gives you comment templates that improve code and relationships—so reviews feel faster, clearer, and less draining.
Quickstart
Want immediate impact without changing your whole process? Use these four habits on your next PR. They reduce back-and-forth, improve clarity, and keep your tone “firm but friendly.”
1) Lead with intent + outcome
Make it obvious you’re improving the code, not judging the person.
- Start with: “To make this safer/faster/easier to maintain…”
- State the risk or benefit in one line
- Then ask a question or propose a fix
2) Label severity (blocker / suggestion / nit)
People get defensive when they can’t tell what’s mandatory vs optional.
- Blocker: correctness, security, data loss, SLA
- Suggestion: design, readability, future-proofing
- Nit: tiny style, naming, formatting
3) Make the next step obvious
Don’t just point at a problem—give a path forward.
- Include a concrete example (“e.g., …”)
- Link to a standard (lint rule, style guide, ADR)
- Offer to pair when it’s non-trivial
4) End with a small kindness
One sincere “nice catch” or “good separation” changes the whole tone.
- Call out one good thing per review (when true)
- Thank them for context or tests
- Assume positive intent by default
Review the code, not the coder. Replace “you” with “this change” and your comments instantly read calmer.
Overview
“Be nice in code reviews” is good advice—until you need to block a risky change, push for better tests, or challenge an API design. The goal isn’t niceness. The goal is clarity without friction.
What you’ll get from this post
- A practical mental model for writing review comments that land well
- Reusable templates for blockers, questions, suggestions, and nits
- A step-by-step workflow for reviewing quickly without missing important issues
- Common failure modes (and how to fix them)
- A scan-fast cheatsheet + a quiz to lock it in
| Review goal | What “good comments” do | What “bad comments” do |
|---|---|---|
| Improve correctness | Explain risk, point to evidence, propose fix | Sound absolute (“this is wrong”), no path forward |
| Improve maintainability | Connect to future changes, shared conventions | Debate preferences, bikeshed endlessly |
| Teach the team | Share principles + references, not lectures | Shame or gatekeep |
| Ship faster | Prioritize and be explicit about severity | Dump a long list with no ordering |
The fix is usually not “be softer.” It’s: be more specific (why + risk + next step) and be clearer about priority.
Core concepts
Strong code review comments share a structure: context → concern → suggestion. They also share a mindset: we’re collaborating on a system, not scoring points. Here are the core ideas that make that happen.
1) Separate intent from impact
Most PRs are made with good intent. But good intent doesn’t prevent bugs, regressions, or long-term complexity. Your job as reviewer is to describe impact in a way the author can act on.
Instead of…
- “This is messy.”
- “Why would you do it this way?”
- “This makes no sense.”
Use…
- “I’m worried this will be hard to debug because…”
- “What’s the reason for X instead of Y? I might be missing context.”
- “Can we add a comment/test here? Without it, I’m not sure about…”
2) Use a “severity ladder” to avoid drama
Disagreements get heated when reviewers treat preferences like emergencies, or authors treat safety concerns like opinions. A severity ladder keeps everyone aligned.
| Label | Meaning | Great opening phrase |
|---|---|---|
| Blocker | Must change before merge (correctness/security/data loss) | “Blocker: This can cause X when Y. Suggest…” |
| Suggestion | Strongly recommended, but negotiable | “Suggestion: To make this easier to maintain, consider…” |
| Question | Seeking understanding / missing context | “Question: What’s the expected behavior when…?” |
| Nit | Minor style/wording; skip if time is tight | “Nit: Tiny thing—could we rename…” |
| Praise | Reinforces good patterns | “Nice—this abstraction makes X easier.” |
3) Prefer evidence over certainty
“This will break” triggers defensiveness. “This might break because…” invites collaboration. When you can, ground the comment in something checkable: a failing test, a race condition scenario, a spec link, an example input, a performance constraint, or a previous incident.
Absolutes with no proof: “obviously,” “clearly,” “just,” “everyone knows,” “this is bad.” Swap them for evidence and your comment becomes harder to argue with—and easier to accept.
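One way to ground a comment in something checkable is to attach a tiny runnable case. A minimal sketch, assuming a hypothetical `parseAmount` function under review (not from any real codebase):

```typescript
// Hypothetical code under review: a reviewer suspects empty input is mishandled.
// Instead of "this is wrong", attach a checkable case like this.
function parseAmount(input: string): number {
  // Current implementation (sketch): Number("") is 0, which can hide missing data
  return Number(input);
}

console.log(parseAmount(""));     // 0 — silently treats missing input as zero
console.log(parseAmount("12.5")); // 12.5 — happy path works
```

A comment like “`parseAmount("")` returns 0 instead of failing—is that intended?” is evidence, not opinion, and it gives the author something concrete to respond to.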
4) Own your preference, don’t force it
Some feedback is subjective (naming, structure, style). State it as preference and tie it to a reason. That shows respect and reduces “power struggle” vibes.
A safe phrasing pattern
“I prefer X because Y. If we go with Z, can we at least ensure W?” This keeps the conversation productive even when you disagree.
Step-by-step
Here’s a review workflow you can use whether you’re a junior dev on your first PR review or a staff engineer reviewing high-risk changes. It’s designed to be fast, consistent, and respectful—without letting risky code sneak through.
Step 0 — Set your default state: curious, not combative
Before you write anything, assume there’s context you don’t have yet. Your first job is to understand the intent of the change. The second job is to improve it.
Fast questions to calibrate
- What problem is the PR solving?
- What are the failure modes?
- What would be hardest to debug later?
- What’s the smallest safe version we can ship?
If the PR description is thin…
- Ask for context once (politely)
- Suggest adding it to the PR template
- Don’t punish the author with 30 follow-ups
Step 1 — Scan for risk first (don’t bikeshed early)
Start by finding the big rocks: correctness, security, data integrity, performance, and user impact. If there’s a true blocker, call it out early so the author doesn’t waste time polishing nits.
Blocker checklist (quick)
- Inputs validated? (nulls, empties, ranges, encoding)
- Error handling sensible? (retries, timeouts, fallbacks)
- Concurrency safe? (shared state, async, transactions)
- Security/privacy OK? (authz, secrets, PII logs)
- Tests cover the new behavior + edge cases?
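To make the checklist concrete, here is a minimal sketch of the kind of guards it looks for. The names (`getUserAge`, `MAX_AGE`) are illustrative, not from the post:

```typescript
// Illustrative only: the guards a "blocker scan" checks for.
const MAX_AGE = 130; // arbitrary sanity bound for this sketch

function getUserAge(raw: unknown): number {
  // Inputs validated: reject null/empty/non-string before use
  if (typeof raw !== "string" || raw.trim() === "") {
    throw new Error("age is required");
  }
  const age = Number(raw);
  // Range check: catches NaN, negatives, and absurd values
  if (!Number.isInteger(age) || age < 0 || age > MAX_AGE) {
    throw new Error(`age out of range: ${raw}`);
  }
  return age;
}
```

If a PR parses input without checks like these, that is usually a Blocker or Suggestion comment, not a Nit.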
Step 2 — Write comments with a consistent “shape”
A good comment is actionable in under 20 seconds: it explains why, points to what, and suggests how. Use this pattern as your default:
The “C-C-S” pattern
- Context: what line/behavior you’re talking about
- Concern: why it matters (risk/maintenance/readability)
- Suggestion: a concrete next step (example, alternative, question)
If your comment feels tense, try one of these softeners that don’t weaken your point: “I might be missing context…”, “Would you be open to…”, “What do you think about…”, “Could we consider…”.
Step 3 — Copy/paste templates (and customize them)
Save these in a snippet manager, or drop them into a CONTRIBUTING.md so the team shares a vocabulary.
The goal is not robotic reviews—it’s reducing friction.
## Review comment tags (use at the start of a comment)
- **Blocker:** must change before merge (correctness/security/data loss)
- **Suggestion:** recommended improvement (design/readability/perf)
- **Question:** missing context / clarify behavior
- **Nit:** minor style/wording (skip if time is tight)
- **Praise:** call out a good pattern (reinforces standards)
## Templates
**Blocker (risk + fix):**
> **Blocker:** This can cause {impact} when {scenario}.
> Suggestion: {concrete change}.
> (Optional) Test idea: {test case}.
**Suggestion (reason + option):**
> **Suggestion:** To make this easier to maintain, consider {approach}.
> Reason: {why}.
> If you prefer current approach, could we at least {minimal safeguard}?
**Question (curiosity + goal):**
> **Question:** What’s the expected behavior when {edge case}?
> I’m asking because {risk/assumption}.
**Nit (low stakes):**
> **Nit:** tiny thing — could we rename {name} to {better name} for clarity?
**Praise (specific):**
> Nice — {specific thing} makes {benefit} much easier.
Step 4 — Turn a “rude” comment into a strong one
Most rude-sounding comments are just missing context and a path forward. Here’s a concrete before/after using a realistic diff. Notice how the improved comment names the risk and proposes an alternative, without implying incompetence.
```diff
diff --git a/src/cache.ts b/src/cache.ts
index 4a1c2d1..f9b12aa 100644
--- a/src/cache.ts
+++ b/src/cache.ts
@@ -1,4 +1,7 @@
 export async function getUser(userId: string) {
+  const cached = cache[userId]
+  if (cached) return cached
   const user = await fetchUser(userId)
+  cache[userId] = user
   return user
 }
```

Reviewer comment (weak / likely to escalate):

> “This is bad. Don’t cache like this.”

Reviewer comment (better):

> “**Blocker:** this cache can grow unbounded and keep user data forever. That’s a memory risk and can also violate retention rules. Suggestion: add TTL + max size, or use an LRU cache. If we keep this simple, could we at least add an eviction strategy and a test for repeated calls?”
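For reference, the “TTL + max size” suggestion could look like this minimal sketch. It is not a production cache (assumes single-threaded access on the Node event loop), and `SimpleLruCache` is a name invented here for illustration:

```typescript
// Minimal sketch of "TTL + max size": not production-ready, no concurrency handling.
class SimpleLruCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();
  constructor(private maxSize: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: evict on read
      return undefined;
    }
    // Refresh recency: Map keeps insertion order, so re-inserting moves it to the end
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    if (this.entries.size > this.maxSize) {
      // Evict least-recently-used entry (first key in insertion order)
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
  }
}
```

Offering a sketch like this in the comment (or a link to a library such as an existing LRU package) turns a blocker from a rejection into a path forward.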
Step 5 — Use automation to reduce “nit” comments
The best way to keep reviews friendly is to avoid arguing about formatting and trivia. Let tooling catch it so humans can focus on correctness and design.
If you see the same nit in reviews more than twice, automate it (formatter, linter, CI check) or document it. Reviews should be for thinking, not whitespace.
```yaml
name: CI
on:
  pull_request:
    branches: ["main"]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Test
        run: npm test

# Why this helps reviews:
# - Lint/format issues get caught automatically.
# - Review comments shift toward behavior, edge cases, and design.
```
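On the linter side, recurring nits can be encoded as rules. A minimal sketch of an ESLint flat config; the specific rules are illustrative choices, not recommendations from this post (and TypeScript files typically also need the TypeScript ESLint plugin):

```javascript
// eslint.config.js — minimal sketch; pick rules that match nits your team
// actually repeats in reviews, then stop leaving those comments by hand.
export default [
  {
    files: ["src/**/*.js"],
    rules: {
      "no-unused-vars": "error",      // dead code nits
      "prefer-const": "warn",         // "could this be const?" nits
      "eqeqeq": ["error", "always"],  // "use === here" nits
    },
  },
];
```

Once a rule exists, the review comment becomes “CI is failing on lint” instead of a human repeating the same nit, which spends no social capital at all.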
Step 6 — Close the loop cleanly
Once the author responds, keep the tone consistent. If they fix it, acknowledge it. If you still disagree, narrow the debate to what matters (risk and constraints).
Good “closing” phrases
- “Thanks — that addresses the risk.”
- “Looks good now. Approved.”
- “I’m satisfied with this tradeoff given the deadline.”
- “Let’s capture the follow-up in an issue so it doesn’t vanish.”
When you still disagree
- Restate the risk and ask for a minimal safeguard
- Offer a short sync if it’s going in circles
- Escalate to an agreed “tie-breaker” (tech lead, owner)
Common mistakes
Most review conflicts come from predictable patterns. Fix these and your reviews get faster, calmer, and more effective.
Mistake 1 — Vague criticism
“This is weird” doesn’t tell the author what to do next.
- Fix: name the impact (“hard to debug,” “risk of race,” “hard to test”).
- Fix: propose one concrete alternative or ask a clarifying question.
Mistake 2 — Treating preferences like blockers
Bikeshedding drains time and trust.
- Fix: label as Nit or Suggestion.
- Fix: connect preferences to a team standard or measurable benefit.
Mistake 3 — “You” language and absolutes
“You messed this up” invites defense. “This change can…” invites collaboration.
- Fix: replace “you” with “this change/this function/this path”.
- Fix: remove “obviously/just/clearly.” Add evidence instead.
Mistake 4 — Asking for big redesigns late
Surprise architecture debates after the author wrote everything feel unfair.
- Fix: comment early on structure, before reviewing nits.
- Fix: propose a “minimal safe merge” + follow-up issue.
Mistake 5 — Comment dumps with no prioritization
Authors can’t tell what matters most, so everything feels like a blocker.
- Fix: summarize at top: “1 blocker, 2 suggestions, 3 nits.”
- Fix: group related comments (“tests,” “API shape,” “naming”).
Mistake 6 — Reviewing without running anything
Speculation sounds harsh when it’s wrong.
- Fix: run the test/lint target when feasible, or ask if it was run.
- Fix: frame uncertainty honestly: “I’m not sure—does this pass when…?”
Adopt a shared vocabulary (Blocker/Suggestion/Nit/Question/Praise) and you’ll see fewer “tone problems,” because people can finally tell what’s urgent and what’s optional.
FAQ
How do I give negative feedback in code review without being rude?
Make it about the change and its impact: “This can cause X when Y, so I suggest Z.” Add a next step (example fix, test idea, or question). Avoid “you” language and absolutes.
What should be a blocker vs a suggestion?
A blocker is for correctness, security, data loss, privacy, or production reliability. A suggestion is for design, readability, performance improvements, or maintainability—important, but negotiable. When in doubt, label it and explain why.
Is it okay to leave “nit” comments?
Yes—if you label them as nits, keep them small, and don’t flood the PR. Better yet, automate repeated nits with formatters/linters so humans don’t spend social capital on whitespace.
How do I respond to a review comment I disagree with?
Don’t argue preference vs preference. Clarify intent and constraints: “I chose this because…” Offer a compromise: “If we keep this approach, I can add X safeguard/test.” If it’s still stuck, propose a short sync.
How many comments are too many?
It depends, but quantity without prioritization is the real problem. Summarize first (“1 blocker, 2 suggestions…”) and group comments by theme. If the PR is huge, suggest splitting it.
What’s the best way to praise in code reviews?
Be specific: praise a decision and its benefit (“Nice separation—this makes the error path easier to test.”). Specific praise reinforces standards and reduces the “reviews only find problems” vibe.
Cheatsheet
A scan-fast checklist for writing code review comments that improve code (without being rude).
Comment structure (copy this)
- Context: what line/behavior you mean
- Concern: why it matters (risk/maintenance)
- Suggestion: what to do next (example)
- Severity tag: Blocker / Suggestion / Question / Nit
Tone checks (10 seconds)
- Remove “obviously/just/clearly.”
- Replace “you” with “this change”.
- Prefer questions for missing context.
- Add one sincere praise when true.
High-signal opening phrases
| When you want to… | Try… |
|---|---|
| Flag risk | “Blocker: This can cause X when Y. Suggest…” |
| Offer improvement | “Suggestion: To make this easier to maintain, consider…” |
| Ask for context | “Question: What’s the expected behavior when…?” |
| Be gentle about preferences | “Nit: small preference—would you be open to…” |
| Reinforce good patterns | “Nice — this makes X simpler because…” |
Phrases to avoid
- “This is bad.” (too vague)
- “Why would you…?” (sounds accusatory)
- “Just do X.” (dismissive; ignores constraints)
- “Obviously…” (adds heat, not clarity)
Wrap-up
The best code review comments aren’t the most polite or the most blunt—they’re the most useful. When you label severity, explain impact, and offer a concrete next step, your reviews become calmer and faster while still protecting quality.
Your next PR review: a 3-step plan
- Start with risk (blockers) before style (nits)
- Write comments using context → concern → suggestion
- Summarize priorities at the end (and add one sincere praise)
If you want to level up quickly, pair this with a team checklist for debugging and a few shared conventions. The best teams don’t rely on everyone “being nice”—they rely on systems that make the right behavior easy.
Quiz
Quick self-check.