Kubernetes feels confusing until you stop trying to memorize objects and start following the traffic. In this post, we’ll build one “story” from the ground up: a Pod runs your app, a Service gives it a stable address, and Ingress lets the outside world reach it. Once that clicks, the rest of Kubernetes becomes variations on the same pattern.
Quickstart
The fastest way to “get it” is to deploy something tiny and trace the path a request takes. This quickstart creates a Deployment (Pods), exposes it as a Service, then proves connectivity. You’ll end with a short debugging checklist you can reuse forever.
The mental model (one sentence)
Pods are the workers, Services are the stable phone number, and Ingress is the front door + routing rules.
- If Pods change, the Service stays.
- If Services change, Ingress routes to the new Service.
- If the front door fails, your app may still be healthy internally.
Do this in 10 minutes
- Create a namespace so the demo doesn’t pollute anything else.
- Deploy a simple HTTP app as a Deployment (creates Pods).
- Expose it as a ClusterIP Service (stable virtual IP + DNS).
- Verify reachability from inside the cluster (debugging superpower).
Quickstart commands (copy/paste)
This uses an example container to keep the focus on Kubernetes wiring. Replace the image later with your own app. If anything fails, jump to the “Common mistakes” section—most issues are predictable.
```bash
# 0) Create an isolated namespace
kubectl create namespace story

# 1) Create a Deployment (Pods are managed by the Deployment controller)
kubectl -n story create deployment web --image=nginx:1.27

# 2) Expose it as a ClusterIP Service (stable in-cluster address)
kubectl -n story expose deployment web --port=80 --target-port=80 --name=web-svc

# 3) Watch Pods come up
kubectl -n story get pods -w

# 4) Prove the Service works from inside the cluster
kubectl -n story run curl --rm -it --image=curlimages/curl:8.7.1 --restart=Never \
  -- curl -sS http://web-svc

# 5) Optional: port-forward to access it from your laptop (without Ingress yet)
kubectl -n story port-forward svc/web-svc 8080:80
```
This quickstart separates problems cleanly. If `curl http://web-svc` works inside the cluster, your Pods + Service are fine. If outside access fails later, you know it’s an Ingress/DNS/load balancer issue—not your app.
Overview
People often learn Kubernetes backwards: they start with Helm charts, Ingress annotations, or “why is my pod pending?” But the core of Kubernetes is a simple system for running processes (containers), keeping them alive, and connecting them. If you understand Pods, Services, and Ingress as a single story, you can reason about most day-to-day issues without panic.
What this post covers
- Pods: what they are, why they are “ephemeral,” and how Deployments manage them.
- Services: stable addressing + load balancing to your pods via labels and selectors.
- Ingress: host/path routing at the edge and where an Ingress controller fits in.
- Debug flow: a simple order of checks that saves hours.
What you’ll be able to do after
- Explain “why my Service has no endpoints” in plain English.
- Trace a request: client → ingress → service → pod.
- Fix common misconfigurations (ports, selectors, readiness, namespaces).
- Read Kubernetes YAML with less guessing.
| In the story | Kubernetes object | What it does | Failure smell |
|---|---|---|---|
| Workers | Pod | Runs your containers (your app process) | CrashLoopBackOff, readiness failing |
| Manager | Deployment | Keeps the right number of Pods running | Replica count mismatch, rollout stuck |
| Phone number | Service | Stable virtual IP + DNS, routes to matching Pods | No endpoints, wrong selector, port mismatch |
| Front door | Ingress (+ controller) | Routes external HTTP(S) to Services | 404/503 at edge, controller logs show misroute |
Kubernetes is less about “starting containers” and more about declaring desired state. You say “I want 3 replicas,” and controllers keep making that true—even when pods die, nodes reboot, or new versions roll out.
Core concepts
Let’s build the mental model in a way that matches what you actually do at work: deploy an app, connect it, and expose it. These concepts are the “minimum set” that makes Kubernetes feel logical instead of magical.
Pods: the smallest deployable unit
A Pod is one or more containers that share networking and volumes. Most apps run one container per pod. The important bit: pods are disposable. They can be recreated any time (new node, new rollout, failure recovery). That’s why you don’t point clients directly at pod IPs.
Pods are “workers,” not “servers”
- They do work (serve HTTP, process jobs).
- They can come and go (IP changes).
- You should assume any pod can disappear.
Deployments keep workers stable
- Deployment says “I want N replicas.”
- It rolls out new versions gradually.
- It creates ReplicaSets behind the scenes.
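A few commands make those bullets concrete. This is a sketch assuming the `story` namespace and `web` Deployment from the quickstart (the `nginx:1.27.1` tag is just an example newer version):

```bash
# Scale to 3 replicas; the Deployment adds/removes Pods to match
kubectl -n story scale deployment web --replicas=3

# Roll out a new image gradually (old Pods drain as new ones become Ready)
kubectl -n story set image deployment/web nginx=nginx:1.27.1

# Watch the rollout; this blocks until it completes or times out
kubectl -n story rollout status deployment/web

# See the ReplicaSets the Deployment created behind the scenes
kubectl -n story get replicasets

# Roll back if the new version misbehaves
kubectl -n story rollout undo deployment/web
```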
Services: stable identity + traffic distribution
A Service is the stable address in front of your pods. It selects pods via labels and routes traffic to them. This is the core puzzle piece: labels connect everything. If the selector doesn’t match pod labels, the Service has no endpoints.
Services don’t “discover” pods by magic. They look for pods whose labels match the selector. No match = no endpoints = traffic black hole.
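You can inspect both sides of that label match directly (a sketch, assuming the quickstart’s `story` namespace):

```bash
# What the Service selects
kubectl -n story get service web-svc -o jsonpath='{.spec.selector}'

# What labels the Pods actually carry
kubectl -n story get pods --show-labels

# The result of the match: if ENDPOINTS is empty, nothing receives traffic
kubectl -n story get endpoints web-svc
```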
Ingress: routing rules at the edge
An Ingress is a set of HTTP(S) routing rules: hostnames and paths that map to Services. But Ingress is only the API object; the thing that actually implements those rules is an Ingress controller (like NGINX Ingress, Traefik, HAProxy, cloud load balancer controllers, etc.).
Ingress is not a load balancer by itself
- Ingress = desired routing configuration.
- Controller = the running reverse proxy / integration.
- External IP / DNS often comes from the controller’s Service.
Where TLS fits
- Ingress can terminate TLS (HTTPS) at the edge.
- Certificate is typically a Secret referenced by Ingress.
- Many teams automate certificates with cert-manager.
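As a sketch, the TLS wiring is a small fragment of the Ingress spec. The Secret name `story-tls` here is hypothetical—it must be a `kubernetes.io/tls` Secret (containing `tls.crt` and `tls.key`) in the same namespace, created by hand or by cert-manager:

```yaml
# Fragment of an Ingress spec (TLS termination at the edge)
spec:
  tls:
    - hosts:
        - story.local
      secretName: story-tls  # hypothetical name; a kubernetes.io/tls Secret
```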
| Thing you want | Use | Why |
|---|---|---|
| Run an app reliably | Deployment | Self-heals and manages rollouts |
| Stable in-cluster address | Service (ClusterIP) | Load-balances to matching Pods |
| External HTTP(S) routing | Ingress + controller | Host/path rules + TLS at the edge |
| Quick local testing | port-forward | Skip edge complexity while debugging |
Step-by-step
Now let’s tell the full story from scratch as if we’re building a simple web app. We’ll go in the same order your traffic goes: Pods → Service → Ingress. Along the way, you’ll see the “debug path” that helps you isolate issues quickly.
Step 1 — Create a Deployment (the pod factory)
Start with a Deployment. Think of it as your “keep this running” contract. In practice, you rarely create standalone Pods for apps—you create Deployments (or StatefulSets/DaemonSets) that create Pods for you.
What to check after applying a Deployment
- Do Pods exist and are they Running?
- Are they Ready (readiness probe passing)?
- Are there image pull errors or crashes?
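The checks above map to a few commands (assuming the quickstart’s `story` namespace and `app=web` labels):

```bash
# Pod status at a glance: STATUS should be Running, READY should be 1/1
kubectl -n story get pods

# Events explain Pending pods, image pull errors, and probe failures
kubectl -n story describe pod -l app=web

# Container logs for crashes (add --previous after a restart)
kubectl -n story logs deployment/web
```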
Step 2 — Add a Service (stable name + load balancing)
Your pods will get new IPs over time. A Service solves this by giving you a stable DNS name (e.g., `web-svc`) and routing traffic to whichever pods match the selector.

A Service routes to Pods via label selectors. If your Deployment labels are `app: web` but your Service selects `app: website`, you will get “no endpoints” and it will feel like Kubernetes is trolling you.
Deployment + Service example (readable YAML)
This is the simplest “app wiring” you’ll see everywhere. Read it as: “Run the pods with label `app: web`, then create a Service that selects `app: web` and forwards port 80.”
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: story
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: story
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 80
```
How to validate the Service
- Endpoints exist: the Service found matching Pods.
- Ports match: Service port → targetPort → containerPort.
- Readiness: only Ready pods become endpoints.
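The three checks above map to commands (a sketch against the quickstart namespace):

```bash
# 1) Endpoints exist: should list one Pod IP:port per Ready replica
kubectl -n story get endpoints web-svc

# 2) Ports match: compare the Service ports to the container port
kubectl -n story get service web-svc -o jsonpath='{.spec.ports}'
kubectl -n story get pods -l app=web -o jsonpath='{.items[0].spec.containers[0].ports}'

# 3) Readiness: Pods must show READY 1/1 to appear in endpoints
kubectl -n story get pods -l app=web
```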
Why Services feel “magical”
Services create stable networking even when pods churn. That stability is what enables safe rollouts and autoscaling: clients keep calling the same name while Kubernetes swaps the backing pods.
Step 3 — Add Ingress (the front door)
If the Service is the stable internal phone number, the Ingress is the receptionist. It answers requests for `app.example.com` (or a path like `/api`) and forwards them to the right Service. The “catch” is that you need an Ingress controller installed in the cluster to actually implement the rules.
When to use Ingress
- You want one external IP for many services (virtual hosting).
- You want host/path routing (e.g., `/api` → `api-svc`).
- You want TLS termination (HTTPS) at the edge.
When not to (yet)
- You’re debugging internal connectivity—use Service + port-forward first.
- You’re exposing non-HTTP traffic—Ingress is primarily for HTTP(S).
- You don’t have an Ingress controller installed.
Ingress example (host + path routing)
This routes requests for `story.local` to `web-svc`. In a real setup, you’d point DNS to your ingress controller’s external IP and (optionally) add TLS. Treat this YAML as a template: your controller may require a specific `ingressClassName`.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ing
  namespace: story
spec:
  ingressClassName: nginx
  rules:
    - host: story.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```
Don’t guess. Follow the path: (1) Ingress controller is running, (2) Ingress rule is accepted, (3) Service exists, (4) Endpoints exist, (5) Pods are Ready. Most failures are a missing controller, wrong class name, wrong service name, or no endpoints.
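That path can be walked with commands. This sketch assumes an NGINX Ingress controller in the conventional `ingress-nginx` namespace—adjust for your installation:

```bash
# (1) Controller running?
kubectl -n ingress-nginx get pods

# (2) Rule accepted? Look for the host, the backend, and an ADDRESS
kubectl -n story describe ingress web-ing

# (3) + (4) Service exists and has endpoints
kubectl -n story get service,endpoints web-svc

# (5) Pods Ready
kubectl -n story get pods -l app=web
```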
Step 4 — The debug path (use this every time)
Kubernetes debugging gets easy when you commit to an order. Don’t start with Ingress logs if pods are crashing. Don’t tweak Services if your selector doesn’t match. Use this flow:
Debug checklist: client → ingress → service → pod
- Pods: Are they Running and Ready? Any restarts? Any probe failures?
- Service: Does it exist? Do ports match? Are there endpoints?
- Ingress: Is the controller installed and healthy? Does the Ingress show an address?
- DNS/Client: Is the hostname pointing to the right external IP? Are you hitting the right host header/path?
If you can successfully curl the Service from inside the cluster but external traffic fails, the app is usually fine. Your issue is almost always in the “front door layer” (Ingress/DNS/LoadBalancer).
Common mistakes
Almost every “Kubernetes is broken” moment is one of these. The fixes are boring—because Kubernetes is actually very consistent once you know where the wiring lives.
Mistake 1 — Service has no endpoints
Symptoms: Service exists, but requests hang/fail; `kubectl get endpoints` shows empty.
- Cause: Service selector doesn’t match pod labels, or pods aren’t Ready.
- Fix: compare labels vs selector; check readiness probes and pod status.
Mistake 2 — Port mismatch (port vs targetPort)
Symptoms: endpoints exist, but you still get connection refused/timeouts.
- Cause: Service routes to the wrong container port.
- Fix: ensure the Service `targetPort` matches the container’s `containerPort` (or the actual listening port).
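One way to shrink this class of bug is to name the container port and reference the name from the Service, so a port renumbering only has to happen in one place. A sketch (fragments of the Deployment and Service specs):

```yaml
# Deployment pod template (fragment): name the port once...
containers:
  - name: nginx
    image: nginx:1.27
    ports:
      - name: http
        containerPort: 8080
---
# Service (fragment): ...and reference it by name instead of a number
ports:
  - port: 80
    targetPort: http
```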
Mistake 3 — Ingress exists, but nothing routes
Symptoms: 404/503 at edge; Ingress has no address; controller logs show “no class” or “no backend.”
- Cause: no Ingress controller installed, wrong `ingressClassName`, or wrong Service name/port.
- Fix: confirm controller pods are Running; verify the class name; confirm the backend service/port exists.
Mistake 4 — Wrong namespace (everything “looks right”)
Symptoms: you apply YAML, but the Service can’t find pods; “not found” errors when referencing resources.
- Cause: Deployment in one namespace, Service/Ingress in another.
- Fix: always check `-n` flags; consider setting a default namespace in your shell context.
Mistake 5 — Readiness probe blocks traffic (by design)
Symptoms: Pods are Running, but Service endpoints are missing or traffic is flaky.
- Cause: readiness probe failing (app not ready), so pods are removed from endpoints.
- Fix: check pod events/logs; make probe realistic; ensure dependencies are reachable.
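A realistic readiness probe gives the app time to start and checks something that reflects actual readiness. The `/healthz` path below is an assumption—use whatever endpoint your app exposes:

```yaml
# Inside the container spec of the Deployment's pod template
readinessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 80
  initialDelaySeconds: 5  # give the app time to boot
  periodSeconds: 10       # re-check every 10s
  failureThreshold: 3     # 3 consecutive failures => removed from endpoints
```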
Mistake 6 — Expecting Service type to do what Ingress does
Symptoms: using NodePort or LoadBalancer for many apps becomes messy fast.
- Cause: each Service wants its own external exposure; you really want shared routing.
- Fix: use Ingress for HTTP(S) routing; keep Service internal unless it truly needs direct exposure.
When something doesn’t work, ask: (1) Do my selectors match? and (2) Do my ports match? If you answer those with evidence, you’ll solve a surprising number of issues without touching anything else.
FAQ
What’s the difference between a Pod and a Deployment?
A Pod runs containers; a Deployment manages Pods. You usually create Deployments because they keep the desired number of pods running and handle rollouts safely. Creating a Pod directly is mostly for experiments and one-off debugging.
Why do I need a Service at all? Can’t I call the Pod IP?
Pod IPs are not stable. Pods get recreated and rescheduled; their IPs change. A Service provides a stable DNS name/virtual IP and routes traffic to matching pods via selectors, which is what makes scaling and rollouts workable.
Why does my Service say “no endpoints”?
Because the Service can’t find Ready Pods that match its selector. Check (1) the Service selector labels, (2) the Pod labels, and (3) whether pods are Ready (readiness probes passing). No match or not Ready = empty endpoints.
Is Ingress the same as a LoadBalancer Service?
No. A LoadBalancer Service typically exposes one Service externally. Ingress provides host/path routing for many Services behind one “front door” (implemented by an Ingress controller). For HTTP(S) apps, Ingress is the clean scaling pattern.
Do I always need an Ingress controller?
Yes, if you want Ingress to actually route traffic. The Ingress object is just configuration. The controller is the running component (reverse proxy/integration) that watches Ingress resources and enforces them.
What’s the fastest way to debug “external access fails”?
Test from inside the cluster first. If you can curl the Service internally, your Pods + Service wiring is good. Then focus on Ingress/controller health, host/path rules, and DNS pointing to the correct external address.
Cheatsheet
Save this. When Kubernetes feels confusing, use the “story path” and the checks below.
The story path (memorize this)
- Pod runs your app (ephemeral).
- Deployment keeps the right number of pods alive.
- Service gives a stable name and routes to pods by label selector.
- Ingress routes external HTTP(S) to Services (requires a controller).
The debug order (do not skip)
- Pods: Running? Ready? Logs/events?
- Service: selector matches? ports match? endpoints exist?
- Ingress: controller installed? class correct? rule accepted? address present?
- DNS/Client: host resolves to correct IP? correct host header/path?
High-signal checks
- Labels vs selectors: if they don’t match, nothing routes.
- Readiness matters: not Ready = not in endpoints.
- Ports matter: the Service `targetPort` must hit the app’s listening port.
- Namespaces matter: “not found” is often “wrong namespace.”
When to use what
- port-forward for quick local testing without edge complexity.
- ClusterIP Service for normal in-cluster traffic.
- Ingress for shared external HTTP(S) routing to many Services.
- LoadBalancer Service for direct external exposure (often infra components or simple cases).
Services route by label selectors. If you can explain what labels your pods have and what your Service selects, you’re already 70% of the way there.
Wrap-up
Kubernetes gets easier when you stop treating it like a bag of objects and start treating it like a flow. Pods do the work (and can change), Services keep a stable address (and route by labels), and Ingress is the front door (implemented by a controller) that routes external HTTP(S) to Services.
What to do next
- Repeat the story with your own app image (Deployment → Service → internal curl).
- Add readiness/liveness probes and see how they affect endpoints and rollouts.
- Install/confirm an Ingress controller and route a hostname to your Service.
- Move on to security and delivery: RBAC/NetworkPolicies, then GitOps and CI/CD patterns.
If you want a smooth learning path, the related posts below line up nicely: start with Docker concepts, then GitOps for safer deployments, and security basics to reduce risk as your cluster grows.