“Linux hardening” doesn’t have to mean a week-long checklist and twenty services you don’t understand. Most real-world compromises on servers come from a small set of repeat offenders: weak SSH access, missing security updates, overly permissive network exposure, and not noticing suspicious activity until it’s too late. This Linux hardening 80/20 guide focuses on four moves that give you the biggest return: SSH, updates, firewall, and logs.
Quickstart
If you only have 30–60 minutes, do this. These steps reduce attack surface immediately and help you recover quickly if something goes wrong. The order matters: lock down access first, then reduce exposure, then make sure you can see what’s happening.
1) Don’t harden blind: keep a recovery path
Before you change SSH or firewall rules, open a second terminal session and confirm you can log in. If your provider has a console (VNC/serial), know how to use it.
- Open a second SSH session (keep it connected)
- Confirm you have sudo access from a non-root user
- Know your rollback: provider console, snapshot, or VM image
2) Patch now: update packages + enable security updates
Unpatched systems get hit by opportunistic scans constantly. Your goal: security patches land without “remembering to do it”.
- Update installed packages (reboot if needed)
- Enable unattended security updates (or a cron/systemd timer)
- Turn on time sync (logs without correct time are pain)
3) Harden SSH: keys only, no root login
SSH is the front door. The 80/20 is: key-based auth, no password login, no root login, and a small allow-list of users.
- Create a non-root sudo user (if you haven’t)
- Disable password auth after keys work
- Limit who can SSH in (AllowUsers)
4) Firewall: default deny incoming
If a service isn’t required, it shouldn’t be reachable from the internet. Default deny + explicit allow makes accidental exposure much harder.
- Default deny inbound
- Allow only what you use (usually SSH + your app port)
- Re-check after installing software (some packages open ports)
Make one change at a time, test it, then proceed. If you change SSH config and firewall rules together, it’s harder to know what broke (and harder to fix quickly).
One command block to get you started (Debian/Ubuntu)
This is a safe baseline you can run on a fresh server. It sets up a non-root user, patches the system, enables a basic firewall, and turns on a simple anti-brute-force layer. Read it before you run it.
# Ubuntu/Debian baseline hardening (run as root once)
# 0) Create a non-root admin user
adduser deploy
usermod -aG sudo deploy
# 1) Patch the box
apt-get update && apt-get -y upgrade
# 2) Install essentials
apt-get -y install unattended-upgrades ufw fail2ban chrony
# 3) Enable unattended security upgrades (interactive prompt)
dpkg-reconfigure -plow unattended-upgrades
# 4) Enable time sync (log timestamps matter)
systemctl enable --now chrony
# 5) Firewall: default deny inbound, allow SSH
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw --force enable
# 6) Enable Fail2ban (uses auth logs to ban brute force)
systemctl enable --now fail2ban
# 7) Verify basics
id deploy
ufw status verbose
systemctl --no-pager status fail2ban
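The dpkg-reconfigure step above is interactive. If you want the same result non-interactively (for a bootstrap script), you can write the file it generates yourself; this is the standard content on Debian/Ubuntu:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which updates get installed is controlled separately in /etc/apt/apt.conf.d/50unattended-upgrades; the default of security-only updates is a sensible starting point.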
The same 80/20 applies on RHEL/Fedora/Alma/Rocky: update packages, use firewalld/nftables, harden sshd, and make sure logs are visible. The exact commands differ, but the mental model is identical.
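As a rough sketch of the RHEL-family equivalent (run as root; verify paths on your distro, since defaults vary slightly between releases):

```
# RHEL/Fedora/Alma/Rocky: automatic updates via dnf-automatic
dnf -y install dnf-automatic
# Flip apply_updates from "no" to "yes" so updates are installed, not just downloaded
sed -i 's/^apply_updates.*/apply_updates = yes/' /etc/dnf/automatic.conf
systemctl enable --now dnf-automatic.timer
```

For the firewall, `firewall-cmd` plays the role UFW plays here, and the sshd drop-in config shown later works unchanged.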
Overview
Linux hardening is about reducing risk with the smallest set of changes that: (1) prevent common attacks, (2) limit blast radius, and (3) give you evidence when things go sideways. The “80/20” framing is useful because perfect security is not a thing—but high leverage is.
What this post covers (and what it skips)
| Area | What you’ll do | Why it matters |
|---|---|---|
| SSH | Keys-only auth, no root login, fewer features | Blocks brute force and weak credential compromises |
| Updates | Patch fast + automate security updates | Closes known vulnerabilities attackers scan for |
| Firewall | Default deny inbound; allow only required ports | Prevents accidental exposure and reduces attack surface |
| Logs | Make auth/network/service logs easy to inspect | Lets you detect abuse and debug incidents quickly |
We’re intentionally skipping deeper topics like full MAC policies (SELinux/AppArmor tuning), kernel sysctl hardening, intrusion detection suites, and centralized SIEM setups. Those can be great—after you’ve nailed the basics.
You can apply this approach to a VPS, a cloud VM, a home lab box, or a Raspberry Pi. It’s also a good baseline before you deploy containers—because containers don’t remove the need to secure the host.
Treat this post as your baseline, not the finish line. For high-stakes systems, add: strong MFA/SSO, disk encryption, secrets management, backups + restore drills, and structured monitoring/alerting.
Core concepts
1) Attack surface: “what can be reached” is what can be attacked
Most server compromises start with reachability. If a service is reachable from the internet, it’s a candidate for scanning, brute force, and exploit attempts. Hardening begins by reducing what’s exposed: fewer open ports, fewer login methods, fewer “nice-to-have” features.
High-leverage reduction
- SSH: keys only, no root login
- Firewall: default deny inbound
- Disable unused services
- Limit admin access to specific users
What reduction is not
- Changing the SSH port and calling it “secure”
- Installing ten security tools and never checking them
- Security-by-obscurity without strong authentication
- Hardening that breaks operations
2) The “golden path”: secure defaults + repeatability
The easiest server to secure is the one you can rebuild. The second easiest is the one that is configured consistently. Even if you aren’t using full Infrastructure-as-Code, try to be repeatable: document your baseline or keep a small bootstrap script.
3) Identity first: SSH is the front door
SSH compromise is common because it’s ubiquitous. The goal is not to make SSH “unhackable”—it’s to make: unauthorized logins rare, brute force ineffective, and privilege escalation difficult. Keys + no root login + restricted users solves most of that.
If password auth is enabled, every public SSH server becomes a brute-force target. If password auth is disabled, random internet scans largely turn into harmless noise in your logs.
4) Patching is a security control (not “maintenance”)
Vulnerabilities aren’t rare surprises—they’re a steady stream. Most attackers don’t need a novel exploit: they use known vulnerabilities against systems that haven’t updated. Automation (unattended security updates, reboot planning) is how you stay ahead without heroics.
5) Logs are your early warning system
Security controls fail sometimes. Logs help you answer the two questions that matter: “Did someone get in?” and “What did they do?” Even a lightweight logging habit (checking auth failures, new services, and unusual network activity) pays off.
80/20 hardening scorecard
| Move | Time | Typical attacks mitigated | “Done” looks like |
|---|---|---|---|
| SSH hardening | 10–25 min | Brute force, weak credentials, root takeover | Keys-only, root disabled, limited users |
| Updates automation | 10–20 min | Known CVEs, opportunistic exploits | Security updates applied automatically |
| Firewall allow-list | 5–15 min | Accidental exposure, scanning | Default deny inbound + explicit allows |
| Log visibility | 10–30 min | Silent compromise, slow brute force | You can quickly check auth + service events |
Step-by-step
This is a practical hardening path you can use on a VPS or cloud VM. It assumes you’re administering over SSH. Do this slowly, test after each step, and keep a rollback option.
Step 0 — Sanity checks (2 minutes)
- Second session open: keep an SSH session connected while you change settings
- Console access: verify you can reach a provider console in emergencies
- Backups/snapshots: snapshot before big changes (especially on production)
- Know your distro: Debian/Ubuntu vs RHEL-family commands differ
Step 1 — SSH: lock down authentication (the big one)
The “minimum strong SSH” is: non-root user + key-based login + no password authentication + restricted users. Everything else is optional tuning.
Checklist
- Create a non-root user and grant sudo
- Add your public key to ~/.ssh/authorized_keys
- Test login with the key (before disabling passwords)
- Disable root login and password auth
- Restart SSH and re-test in a new session
Security notes
- Use long keys (ed25519 recommended where available)
- Prefer a passphrase on your private key
- Don’t copy private keys to servers
- Limit SSH features you don’t use (forwarding, X11)
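Generating a key looks like this; the key path and comment below are examples, not a standard (run this on your workstation, not the server):

```shell
# Create ~/.ssh with safe permissions if it doesn't exist yet
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
# -N '' makes this scriptable for the demo; in practice, omit -N so
# you're prompted for a passphrase (recommended above)
[ -f "$HOME/.ssh/id_ed25519_deploy" ] || \
  ssh-keygen -t ed25519 -a 64 -f "$HOME/.ssh/id_ed25519_deploy" -N '' -C 'deploy key'
# Install the public key while password auth still works, then test it:
#   ssh-copy-id -i ~/.ssh/id_ed25519_deploy.pub deploy@your-server
#   ssh -i ~/.ssh/id_ed25519_deploy deploy@your-server
```

Only after the key login works in a fresh session should you disable password authentication.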
A clean way to manage sshd settings on modern distros is a drop-in file under /etc/ssh/sshd_config.d/.
This avoids editing the vendor file and makes your changes obvious.
# /etc/ssh/sshd_config.d/99-hardening.conf
# Apply the essentials first; then optionally tighten further.
# Always keep a second SSH session open while testing.
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AuthenticationMethods publickey
# Restrict who can SSH in (replace with your user(s))
AllowUsers deploy
# Reduce risky features if you don't need them
X11Forwarding no
AllowAgentForwarding no
AllowTcpForwarding no
PermitTunnel no
# Slow down brute force + improve logging visibility
MaxAuthTries 3
LoginGraceTime 30
ClientAliveInterval 300
ClientAliveCountMax 2
LogLevel VERBOSE
Disabling TCP forwarding is great for reducing abuse, but it can break legitimate port-forward usage (tunnels). If you rely on tunnels, keep forwarding enabled and focus on keys-only + user restrictions instead.
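Before restarting sshd, validate the config; a typo in a drop-in file can lock you out. Roughly (run as root on the server, with a second session open):

```
# Syntax check; prints nothing and exits 0 when the config is valid
sshd -t && echo "sshd config OK"
# Show the effective values after drop-ins are merged
sshd -T | grep -iE '^(permitrootlogin|passwordauthentication|allowusers)'
# Only restart once the check passes
systemctl restart ssh    # unit is "sshd" on RHEL-family systems
```

If `sshd` isn’t on root’s PATH, use the full path (commonly /usr/sbin/sshd).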
Step 2 — Updates: patch fast and make it automatic
Two good hardening habits: (1) install security updates automatically, and (2) plan reboots. Many security patches require a restart of a service (or the kernel) to take full effect.
Practical baseline
- Update packages weekly at minimum
- Enable unattended security updates (or equivalent)
- Schedule reboots (maintenance window) if kernel updates land
- Keep a short “what changed” note for production
What to watch out for
- Updates can restart services (brief outages)
- Custom configs can conflict with new defaults
- Third-party repos add risk; keep them minimal
- Don’t leave “pending upgrades” for months
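On Debian/Ubuntu, a quick way to tell whether a past update is still waiting on a restart is the reboot-required marker file; a minimal check:

```shell
# Debian/Ubuntu: was a reboot requested by a previous update?
if [ -f /var/run/reboot-required ]; then
  status="reboot required"
  # List which packages asked for it, if the file exists
  cat /var/run/reboot-required.pkgs 2>/dev/null
else
  status="no reboot required"
fi
echo "$status"
```

This is a handy thing to check in your weekly routine, or to wire into monitoring.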
Step 3 — Firewall: “deny by default” and be explicit
The firewall is where you prevent “oops, I installed a thing and now it’s public.” Your goal is simple: only the ports you intentionally expose are reachable.
Rule of thumb
For most small servers, inbound should be: SSH + your app/reverse proxy (e.g., 80/443) and nothing else. Databases and admin panels should not be reachable from the internet unless you have a specific reason and extra controls.
Some setups lock down IPv4 but leave IPv6 wide open. If your host has IPv6, make sure your firewall rules cover it, or disable IPv6 intentionally if you don’t use it.
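With UFW on Ubuntu, IPv6 coverage hinges on one setting; a quick check (run as root):

```
# UFW only generates IPv6 rules when IPV6=yes in /etc/default/ufw
grep '^IPV6=' /etc/default/ufw
# When enabled, each rule appears twice in the output: once plain, once "(v6)"
ufw status verbose
```

If the `(v6)` lines are missing and your host has a public IPv6 address, fix that before moving on.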
Step 4 — Logs: make “what happened?” answerable in 60 seconds
You don’t need a full SIEM to benefit from logging. You need a few reliable views: authentication attempts, sudo usage, service restarts, and firewall/connection events (as appropriate). The key is consistency: the same few commands you run whenever you feel uneasy.
# 60-second security log triage (systemd/journald systems)
# 1) Recent SSH activity (last hour)
journalctl -u ssh -S "1 hour ago" --no-pager   # unit is "sshd" on RHEL-family systems
# 2) Authentication failures (grep-like view)
journalctl -S "24 hours ago" | grep -Ei "failed password|invalid user|authentication failure" | tail -n 50
# 3) Sudo usage (who elevated privileges?)
journalctl -S "24 hours ago" | grep -Ei "sudo|COMMAND=" | tail -n 50
# 4) Firewall status (UFW example)
ufw status verbose
# 5) Fail2ban overview (if installed)
fail2ban-client status
fail2ban-client status sshd
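Slow brute force hides in volume, so a per-IP tally is more revealing than scrolling raw lines. A minimal sketch; the log lines in the heredoc are made up for illustration, and in practice you’d pipe in `journalctl -S "24 hours ago"` (or /var/log/auth.log) instead:

```shell
# Count failed SSH login attempts per source IP, highest first
summarize_failures() {
  grep -Ei "failed password|invalid user" \
    | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
    | sort | uniq -c | sort -rn
}

# Demo input (fabricated lines in standard sshd log format)
summarize_failures <<'EOF'
Jan 10 10:00:01 host sshd[811]: Failed password for invalid user admin from 203.0.113.7 port 4242 ssh2
Jan 10 10:00:05 host sshd[811]: Failed password for root from 203.0.113.7 port 4243 ssh2
Jan 10 10:01:00 host sshd[812]: Failed password for invalid user test from 198.51.100.9 port 2222 ssh2
EOF
```

One IP with hundreds of hits is normal internet noise when password auth is off; many attempts against a real username is worth a closer look.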
What “normal” looks like
- Occasional failed SSH attempts from random IPs
- Expected logins from your admin IPs
- Package updates at predictable times
- Service restarts you can explain
What should trigger curiosity
- Successful logins you don’t recognize
- New users added unexpectedly
- Repeated sudo use at odd times
- New listening ports or services
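Two of those curiosity triggers, unexpected users and new listening ports, are cheap to spot-check; a rough sketch:

```shell
# Human accounts (UID >= 1000) that still have a usable login shell
list_login_users() {
  awk -F: '$3 >= 1000 && $7 !~ /(nologin|false)$/ {print $1}' "${1:-/etc/passwd}"
}
list_login_users

# Listening TCP sockets; add -p as root to see which process owns each port
if command -v ss >/dev/null; then
  ss -tln
fi
```

Run these when the box is healthy and keep the output; “diff against known-good” is the simplest anomaly detection there is.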
Step 5 — Validate your hardening (small checks that prevent big mistakes)
- SSH: can you log in as your non-root user with keys? Is password auth disabled?
- Firewall: from another machine, can you reach only the ports you expect?
- Updates: is automatic security updating enabled and actually running?
- Logs: can you quickly view SSH and sudo activity?
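For the firewall check, you want to test reachability from a different machine, not from the server itself (local rule listings can lie, especially with containers in play). A bash-only sketch that needs no nmap or netcat; the host and ports are placeholders:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device
port_open() {  # usage: port_open <host> <port>
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed/filtered"
  fi
}

# Replace with your server's address and the ports you expect (and don't expect)
port_open 127.0.0.1 22
port_open 127.0.0.1 5432   # e.g. confirm your database is NOT public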
Hardening is not a one-time event. The sustainable version is: small automation, small repetition, and a habit of checking the basics. If you can keep it boring, you can keep it secure.
Common mistakes
These are the classic “we did some security stuff but still got burned (or locked ourselves out)” patterns. The fixes are usually simple.
Mistake 1 — Disabling passwords before keys are tested
People paste an SSH key, flip PasswordAuthentication no, restart sshd… and then realize they were editing the wrong user or file.
- Fix: test key login in a new session first.
- Fix: keep a second SSH session open while changing sshd.
- Fix: know how to use provider console/serial access.
Mistake 2 — Leaving root login enabled “for convenience”
Root login dramatically increases impact if credentials are compromised, and it makes auditing “who did what” harder.
- Fix: use a normal user + sudo.
- Fix: restrict SSH users with AllowUsers (or AllowGroups).
Mistake 3 — “Firewall enabled” but everything is allowed
A firewall that allows inbound by default is mostly decoration. The 80/20 is default deny + explicit allow.
- Fix: set default deny inbound and allow only SSH + required app ports.
- Fix: re-check after installing new services.
- Fix: make sure IPv6 is covered if it’s enabled.
Mistake 4 — “We update sometimes” (and then forget)
Manual updates are a calendar problem. A busy month becomes a vulnerability window.
- Fix: enable unattended security updates (or scheduled patching).
- Fix: plan a reboot process for kernel/critical service updates.
Mistake 5 — Installing security tools but never checking logs
Tools don’t help if nobody looks. You don’t need daily deep dives—just a quick “is anything weird?” routine.
- Fix: keep a short log triage checklist (see Step 4).
- Fix: ensure time sync is enabled for accurate timestamps.
Mistake 6 — Hardening that breaks operations
Security controls that block legitimate workflows get rolled back under pressure. Sustainable security means “secure and usable”.
- Fix: tighten in small steps; document exceptions (e.g., keep SSH forwarding if you need tunnels).
- Fix: test changes in staging or on a snapshot first when possible.
Some container setups manipulate iptables/nftables rules automatically, which can bypass naive firewall expectations. If you run containers, verify reachability from outside (not just local rules output).
FAQ
Do I need to change the SSH port for Linux hardening 80/20?
No—keys-only SSH and disabling password authentication are far more important. Changing the port may reduce noise in logs, but it doesn’t replace strong authentication and user restrictions.
Is UFW “good enough” for a small server?
Yes, for most single-host setups. UFW is a friendly interface to firewall rules and is perfectly fine as an allow-list: default deny inbound + allow SSH + allow your web ports. If you need complex routing, multi-interface policies, or advanced filtering, you may outgrow it—but the 80/20 is still the same.
How often should I apply updates?
Security updates should be applied automatically or at least weekly. For internet-facing servers, long patch delays create easy targets. If uptime is critical, use a maintenance window and staged rollouts, but avoid “we’ll do it later” for months.
What SSH settings give the biggest security improvement?
Disable root login and disable password authentication. Then restrict which users can SSH in (AllowUsers or AllowGroups), and reduce optional features you don’t use (forwarding, X11).
How do I harden without locking myself out?
Make one change at a time and test in a new session. Keep a second SSH connection open while editing sshd/firewall rules, and make sure you have provider console access or a snapshot.
What logs should I check first when something feels off?
Start with SSH/authentication logs and sudo activity. Look for unusual successful logins, repeated failures from a single IP, new users, or unexpected privilege escalation. Then check service restarts and any newly opened listening ports.
Should I install Fail2ban?
It’s a useful extra layer, especially if you can’t fully restrict SSH by IP. Fail2ban won’t replace keys-only SSH, but it can reduce brute-force noise and block repeated offenders automatically.
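If you do install it, local overrides go in jail.local rather than the packaged jail.conf. A minimal example; the ban timings below are illustrative choices, not recommended values, and on Debian/Ubuntu the sshd jail is already enabled by default:

```
# /etc/fail2ban/jail.local -- local overrides (values are examples)
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```

After editing, `systemctl restart fail2ban` and confirm with `fail2ban-client status sshd`.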
Cheatsheet
Scan this when you’re setting up a new server, reviewing an old one, or doing a “quick security tune-up.” If you can check off most of this, you’re in a strong 80/20 place.
SSH (access)
- Non-root user with sudo (no routine root login)
- SSH keys working for that user
- PasswordAuthentication disabled
- PermitRootLogin disabled
- AllowUsers/AllowGroups restricts SSH access
- Optional: disable forwarding/X11 if unused
Updates (patching)
- Regular updates applied (weekly minimum)
- Unattended security updates enabled (or scheduled patch job)
- Reboot plan exists for kernel/critical updates
- Minimal third-party repos
- Time sync enabled (chrony/systemd-timesyncd)
Firewall (exposure)
- Default deny inbound
- Allow only SSH and required application ports
- No public database/admin ports without a strong reason
- IPv6 considered (secured or intentionally disabled)
- Re-checked after new packages/services installed
Logs (visibility)
- You can view recent SSH logs quickly
- You can view sudo usage quickly
- Basic “weird activity” routine exists (5 minutes)
- Optional: Fail2ban installed and enabled
- Log timestamps are accurate (time sync)
Run updates, confirm firewall still matches reality, then skim SSH/sudo logs for anything unexpected. Ten minutes per week beats a six-hour incident later.
Wrap-up
The best part about Linux hardening 80/20 is that it’s realistic: you’re not trying to eliminate all risk, you’re trying to block the common paths attackers use and make your system resilient. If you do nothing else, remember the four pillars: SSH (keys-only, no root), updates (patch fast, automate), firewall (default deny inbound), and logs (know what changed and who logged in).
Suggested next actions (pick one)
- Apply the SSH drop-in config and verify you can log in safely
- Enable unattended security updates (or scheduled patching)
- Switch inbound policy to default deny and allow-list ports
- Adopt the 60-second log triage routine when anything feels off
If you want to go beyond 80/20 next, the natural progression is: threat modeling your setup, hardening your app/auth layer, adding monitoring/alerting, and tightening your CI/CD pipeline (DevSecOps). The related posts below are good “next reads” depending on what you’re building.