Core Kubernetes Pods – The Smallest Deployable Unit in Kubernetes
A Kubernetes Pod is the smallest and simplest deployable unit in a cluster. It represents one instance of a running process and encapsulates one or more tightly coupled containers that share the same network namespace and storage volumes. This deep dive covers architecture, networking, lifecycle, health probes, controllers, scheduling, security, observability, and disruption policies, along with best practices and a practical troubleshooting playbook using kubectl and PowerShell.
What is a Pod?
A Pod is a logical wrapper around one or more containers that should run together on the same worker node. Containers in a pod:
- Share the same IP address and network namespace.
- Can communicate with each other over localhost (e.g., 127.0.0.1).
- Can share mounted volumes for data exchange and persistence.
- Are scheduled as a single unit by the Kubernetes scheduler.
- Pods abstract container details so controllers can manage desired state.
- Use a single primary application container per pod in most cases.
- Co-located helper containers use patterns like sidecar and init containers.
- Pods are ephemeral; use controllers like Deployments to maintain replicas.
General Pod FAQs (10)
- Is a Pod the same as a container? No. A pod can host one or more containers that run together and share resources.
- Why does Kubernetes use Pods instead of running containers directly? Pods provide grouping, shared networking, and lifecycle semantics that controllers can manage.
- Can a Pod span multiple nodes? No, a pod is always bound to a single node.
- How do I expose a Pod? Use a Service (ClusterIP/NodePort/LoadBalancer) or Ingress.
- How do Pods get IPs? Via the cluster CNI; each pod gets a unique IP in the flat network.
- Should I create Pods directly? For learning, yes; for production, use controllers (e.g., Deployment, StatefulSet).
- How do I persist data? Mount PersistentVolumes via PersistentVolumeClaims.
- How do I run a task once? Use a Job or CronJob, not a Deployment.
- How to keep config out of images? Use ConfigMaps and Secrets.
- How do I restart a pod? Delete it; the controller recreates it to match desired state (see the one-liners below).
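A few of these answers map directly to one-liners. The commands below are a sketch; the hello Deployment, Job name, and placeholder pod name are illustrative.
# expose a Deployment's pods behind a stable ClusterIP Service
kubectl expose deployment hello --port=80 --target-port=8080
# "restart" a pod by deleting it; its controller recreates it
kubectl delete pod <pod-name>
# run a task once with a Job instead of a Deployment
kubectl create job one-off --image=busybox -- echo done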
Why Pods Matter
Pods provide a layer of abstraction that lets Kubernetes controllers manage scaling, rollouts, self-healing, and placement. By defining a pod template inside a controller (e.g., in a Deployment), you express the desired container image, resources, probes, environment, and volumes. The controller reconciles the current state to the desired state; a minimal Deployment sketch follows the list below.
- Controllers abstract the lifecycle—restarts, replicas, and rollouts.
- Pods are disposable; treat them as cattle, not pets.
- Declare everything in YAML; don’t patch ad-hoc in production.
- Workloads should be built to tolerate restarts and rescheduling.
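To make the pod-template idea concrete, here is a minimal Deployment sketch; the name, labels, and image are illustrative (they reuse the hello example from later sections).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                  # desired state: three identical pods
  selector:
    matchLabels:
      app: hello               # must match the pod template labels
  template:                    # the pod template the controller stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: app
        image: ghcr.io/example/hello:1.0.0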
Controller & Purpose FAQs (10)
- What maintains pod count? ReplicaSets (managed by Deployments).
- Do I edit pods directly? Edit the controller’s pod template; pods are ephemeral.
- How are rollouts handled? Deployments support rolling updates, strategies, and rollbacks.
- How do I pin a pod to a node? Use nodeAffinity or nodeSelector.
- How to avoid co-location? Use podAntiAffinity.
- How to ensure co-location? Use podAffinity.
- How to run one per node? Use a DaemonSet.
- How to preserve identity? Use a StatefulSet with stable names and storage.
- How to run to completion? Use a Job (see the sketch after this list).
- How to handle cron tasks? Use a CronJob.
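As a sketch of the run-to-completion answers above (the name, image, and command are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3              # retry failed pods up to three times
  template:
    spec:
      restartPolicy: OnFailure # Jobs require OnFailure or Never
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]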
Container Grouping, Sidecars & Init Containers
Most pods run a single application container. Multi-container pods are used when processes are tightly coupled and must share network and volumes. Common patterns:
- Sidecar: a helper container (logging agent, proxy, metrics collector).
- Ambassador: a local proxy container that mediates outbound connections.
- Adapter: transforms output (e.g., converts logs or metrics formats).
- Init Containers: run to completion before app containers start (e.g., copy config, wait for dependencies).
- Init containers ensure prerequisites, then exit; their success gates app startup.
- Sidecars inherit the pod network; communicate via localhost.
- Keep sidecars minimal to avoid resource contention.
- Use readinessProbes that consider sidecar readiness if traffic depends on them.
Multi-Container Patterns FAQs (10)
- When should I use a sidecar? For cross-cutting concerns like logging or service mesh proxies.
- Do sidecars increase resource usage? Yes—set proper requests/limits.
- Can init containers use different images? Yes—great for tooling not needed at runtime.
- How do I share files across containers? Mount the same volume to both containers.
- How to order startup between containers? Use init containers; app containers start together after init.
- How to wait for a dependency? Use an init container with curl/bash loop to block.
- How to debug sidecar failures? Check kubectl logs for the sidecar container via the -c flag.
- Do sidecars complicate HPA? They can; consider separate deployments for scale-sensitive components.
- What about service mesh? Sidecars (proxies) are injected to enforce traffic policies and telemetry.
- Can sidecars be optional? Use feature flags or different pod templates per environment.
Pod Networking & Pod-to-Pod Communication
The Kubernetes networking model is flat. Every pod gets a unique IP; pods can reach each other without NAT. Containers in the same pod use localhost to communicate; cross-pod traffic uses pod IPs or Services for stable discovery and load balancing.
- Single IP per Pod: shared across all containers in the pod.
- Service Discovery: DNS entries created for Services (e.g., my-svc.default.svc.cluster.local).
- Network Policies: optional rules to restrict traffic between pods/namespaces (see the sketch after this list).
- Use Services for stable endpoints.
- Prefer DNS names over hard-coding IPs.
- Lock down east-west traffic with NetworkPolicies.
- Use Ingress or Gateways to manage north-south traffic.
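As a sketch of locking down east-west traffic: the policy below admits ingress to api pods only from pods labeled app: web on port 8080. The names and port are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-web
spec:
  podSelector:
    matchLabels:
      app: api                 # pods this policy protects
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web             # only web pods may connect
    ports:
    - protocol: TCP
      port: 8080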
Networking FAQs (10)
- How do I reach a pod directly? kubectl port-forward or a Service.
- My pods can’t reach each other—why? Check CNI and NetworkPolicies.
- DNS not resolving? Inspect coredns logs in kube-system.
- How to restrict ingress/egress? Apply NetworkPolicies with labels/selectors.
- How do headless services work? clusterIP: None creates per-pod A records for stateful apps (see the sketch after this list).
- What is kube-proxy? Programs Service VIP rules on nodes.
- How to observe traffic? Use tcpdump, wireshark on nodes, or eBPF tools.
- How to expose externally? type: LoadBalancer, Ingress, or Gateway.
- How to do mutual TLS? Via service mesh or app-level TLS with Secrets.
- Why prefer Services over direct IP? Pod IPs are ephemeral; Services are stable.
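A headless Service sketch for the per-pod A records mentioned above; the name, selector, and port are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None              # headless: DNS returns per-pod A records, no VIP
  selector:
    app: db
  ports:
  - port: 5432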
Shared Storage & Volumes
Pods mount volumes to persist data or share files between containers. Use PersistentVolumeClaims (PVCs) to request storage from a StorageClass. For ephemeral scratch space, use emptyDir.
- PVC + PV: durable storage, survives pod restarts.
- ConfigMap/Secret volumes: mount configuration and credentials.
- Projected volumes: combine multiple sources.
- Use ReadWriteOnce/Many based on access needs (a PVC sketch follows this list).
- Back up PVCs via CSI snapshots where available.
- Mount Secrets read-only; rotate regularly.
- Prefer StatefulSets for per-pod storage identity.
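A minimal PVC sketch to go with these points; the claim name, size, and StorageClass are illustrative and depend on what your cluster provisions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]   # single-node read-write
  storageClassName: standard       # assumption: this StorageClass exists in your cluster
  resources:
    requests:
      storage: 1Gi
Reference it from a pod via a persistentVolumeClaim volume (claimName: data-claim) and a matching volumeMount.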
Storage FAQs (10)
- Why is my PVC pending? No matching StorageClass or capacity.
- Can multiple pods mount the same volume? Depends on access mode and CSI driver.
- How to migrate data? Use snapshots or volume cloning if supported.
- How to encrypt data at rest? Use CSI encryption or provider-level encryption.
- How to mount subpaths? Use subPath in volumeMounts.
- Secret too large? Use CSI Secrets Store or external vaults.
- How to share scratch space? Use emptyDir.
- How to mount read-only? readOnly: true on the mount.
- PVC stuck in terminating? Finalizers or in-use mounts—check events and node references.
- How to back up PVCs? Snapshots, velero, or storage-native tools.
Pod Lifecycle & Restart Policies
Pod phases: Pending, Running, Succeeded, Failed, Unknown. The container restartPolicy controls whether containers restart: Always, OnFailure, or Never. Controllers (e.g., Deployments) recreate pods that go missing to match the desired replica count.
- Graceful termination: SIGTERM → terminationGracePeriodSeconds → SIGKILL.
- PreStop hooks: execute before termination (flush queues, deregister).
- PostStart hooks: run after container start.
- Design apps for idempotent startup/shutdown.
- Use hooks to orchestrate drains and deregistration.
- Right-size terminationGracePeriodSeconds for safe shutdowns (see the sketch after this list).
- Use Jobs for run-to-completion workloads.
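A sketch of these termination settings; the grace period and the sleep-based preStop command are illustrative stand-ins for your app's real drain steps.
spec:
  terminationGracePeriodSeconds: 60    # SIGTERM → up to 60s → SIGKILL
  containers:
  - name: app
    image: ghcr.io/example/hello:1.0.0
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]   # illustrative: let the LB deregister first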
Lifecycle & Policy FAQs (10)
- Pod keeps restarting—why? CrashLoopBackOff due to app error or failing probes.
- What triggers rescheduling? Node pressure, taints, failures, or scaling actions.
- How to delay shutdown? Increase terminationGracePeriodSeconds.
- How to run cleanup on shutdown? Use a preStop lifecycle hook.
- How to make a pod run once? Use a Job with restartPolicy: OnFailure or Never.
- What is ImagePullBackOff? Image unavailable/unauthorized; check pull secrets and registry.
- Why is my pod Pending? Insufficient resources or scheduling constraints.
- How to control restarts? Use restartPolicy; for managed restarts, rely on controllers.
- How to pause a rollout? kubectl rollout pause deployment/<name>.
- How to roll back? kubectl rollout undo deployment/<name>.
Liveness, Readiness, and Startup Probes
Liveness probes restart unhealthy containers. Readiness probes gate traffic until the app is ready. Startup probes prevent liveness from killing slow starters prematurely.
- Probe types: httpGet, tcpSocket, exec.
- Tune initialDelaySeconds, periodSeconds, timeoutSeconds, failureThreshold, successThreshold.
- Never reuse liveness for readiness—keep them distinct.
- Set startup probes for slow frameworks and heavy migrations.
- Prefer lightweight health endpoints (/healthz, /readyz).
- Log probe failures; surface metrics to Prometheus.
Probes FAQs (10)
- My liveness kills the app during startup—why? Missing startup probe or long init time.
- What’s a good readiness check? Shallow dependency checks (DB ping, cache reachability).
- Is exec probe expensive? It can be; prefer HTTP/TCP when possible.
- Do probes need auth? Avoid unless necessary; isolate with networkPolicy if sensitive.
- How to expose only inside the cluster? Bind to 127.0.0.1 or use NetworkPolicy.
- How to observe probe failures? Events, container logs, and metrics.
- Can probes cause restart loops? Yes—tune thresholds and timeouts.
- What if app is event-driven? Use custom readiness logic or lightweight exec checks.
- How to gate traffic behind a sidecar? Readiness should consider sidecar state.
- Should I probe stateful systems? Yes, but make it fast and non-intrusive.
Scheduling: Requests, Limits, Affinity, Tolerations
Kubernetes schedules pods based on resources and constraints: CPU/memory requests (for placement) and limits (for capping). Use labels/selectors, affinity/anti-affinity, and tolerations/taints to influence placement.
- Requests guarantee scheduling; limits prevent noisy neighbors.
- Node affinity matches required labels.
- Tolerations allow scheduling onto tainted nodes.
- Right-size requests to avoid bin-packing waste or Pending pods.
- Use topologySpreadConstraints for high availability across zones (see the sketch after this list).
- Reserve headroom for spikes (HPA, VPA as needed).
- Beware OOMKilled from low memory limits.
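A topologySpreadConstraints sketch for spreading api replicas across zones; the label and skew values are illustrative.
spec:
  topologySpreadConstraints:
  - maxSkew: 1                              # zones may differ by at most one replica
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway       # soft spread; use DoNotSchedule for a hard rule
    labelSelector:
      matchLabels:
        app: api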
Scheduling & Resources FAQs (10)
- Why Pending? Requests exceed available node capacity or affinity conflicts.
- How to avoid CPU throttling? Set limits >= sustained usage; consider removing CPU limit for latency-sensitive apps.
- How to prevent noisy neighbor? Set limits and use QoS classes (Guaranteed/Burstable/BestEffort).
- How to keep replicas in different zones? topologySpreadConstraints across zones.
- How to run only on GPU nodes? Node labels + nodeAffinity.
- How to tolerate taints? Add matching tolerations to pod spec.
- Why OOMKilled? Memory limit too low or leak—check usage and tune.
- How to size requests? Start from observed P95 usage; iterate.
- How to pin to a specific node? nodeSelector or affinity rules.
- How to balance spread vs. pack? Use preferredDuringScheduling (soft) vs. required (hard) rules.
Security Contexts & Namespaces
Use namespaces for isolation and quotas. SecurityContext (pod/container level) controls user/group IDs, Linux capabilities, seccomp, privilege escalation, and file permissions. Combine with RBAC, admission controls, and the Pod Security Standards; a hardened securityContext sketch follows the list below.
- Run as non-root; drop capabilities.
- Read-only root filesystem where possible.
- Use network policies; restrict egress.
- Scan images; sign with provenance (SBOM, attestations).
- Namespace boundaries + RBAC enforce least privilege.
- Use service accounts with minimal permissions.
- Validate with policy engines (admission/OPA/Gatekeeper).
- Encrypt secrets in transit and at rest.
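A hardened securityContext sketch that combines the points above; the user ID and image are illustrative, and some apps need a writable filesystem or extra capabilities.
spec:
  securityContext:              # pod level
    runAsNonRoot: true
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: ghcr.io/example/hello:1.0.0
    securityContext:            # container level
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]           # drop everything; add back only what is needed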
Security FAQs (10)
- How to run as non-root? Set runAsUser and runAsNonRoot: true.
- How to block privilege escalation? allowPrivilegeEscalation: false.
- How to restrict capabilities? capabilities: drop: [ALL], then add a minimal set.
- How to mount secrets securely? Read-only mounts, rotate, and audit access.
- How to isolate namespaces? NetworkPolicies, RBAC, and resource quotas.
- How to enforce security baselines? Pod Security admission (baseline/restricted).
- How to sign images? Use cosign and policy enforcement.
- How to prevent container escape? Harden kernel, seccomp, and disallow privileged pods.
- How to audit? Enable audit logs and centralize with SIEM.
- How to source secrets from a cloud vault? CSI Secrets Store or external secret operators.
Logs, Monitoring & Troubleshooting Fundamentals
Observe Pods with logs, events, metrics, and traces. Tools include kubectl, Prometheus, Grafana, and tracing systems. Start with kubectl get/describe, check events, inspect logs, and look at resource usage and probes.
- Always check kubectl describe pod for events.
- Differentiate app logs vs. sidecar logs.
- Watch pod status changes live with --watch.
- Correlate with node pressure and scheduler decisions.
Observability FAQs (10)
- Where are my logs? kubectl logs <pod> (add -c for a specific container).
- How to stream logs? kubectl logs -f.
- How to see previous crash logs? kubectl logs --previous.
- How to view events? kubectl get events --sort-by=.lastTimestamp.
- How to exec into a container? kubectl exec -it <pod> -- /bin/sh.
- How to get YAML? kubectl get pod <pod> -o yaml.
- How to debug network? Ephemeral debug containers or kubectl debug (see the example after this list).
- How to see resource usage? Metrics server, kubectl top pods.
- How to tail all pods in a deploy? Label selector with -l and loops.
- How to capture node dmesg? SSH to the node or use node-troubleshooter tooling.
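An example of the kubectl debug approach mentioned above; the busybox image is illustrative, and --target assumes the app container name from earlier examples.
# attach an ephemeral debug container sharing the app container's process namespace
kubectl debug -it <pod> --image=busybox:1.36 --target=app
# or debug at the node level with a privileged helper pod
kubectl debug node/<node> -it --image=busybox:1.36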
Pod Disruption Budgets (PDBs) & Graceful Termination
PDBs limit simultaneous voluntary disruptions (e.g., upgrades, node drains). Graceful termination sends SIGTERM, waits for terminationGracePeriodSeconds, then SIGKILL.
- Set minAvailable or maxUnavailable to keep SLOs.
- Use preStop hooks to drain connections and commit offsets.
- Use PDBs for stateless and stateful workloads.
- Coordinate rollouts with HPA/VPA to avoid mass restarts.
- Ensure readiness removes pods from load balancers before SIGKILL.
- Make shutdown idempotent and fast.
Disruption & Termination FAQs (10)
- Why did drain fail? PDB prevents evicting too many replicas.
- How to do zero-downtime? Tune surge/unavailable and readiness gates.
- What if preStop hangs? Container gets SIGKILL after grace period.
- How to remove from LB first? Ensure readiness turns false before shutdown.
- How to upgrade safely? Slow rollouts, monitor errors, respect PDBs.
- How to handle a planned node drain? Temporarily raise minAvailable or scale up first.
- Why were too many evicted? Misconfigured PDB or replica counts too low.
- How to block eviction? Set stricter PDB or temporarily scale up.
- Does PDB apply to unplanned outages? No—PDBs protect against voluntary disruptions.
- Can I dry-run? Yes—use --dry-run=server to validate PDBs.
Best Practices for Pods
- Prefer Deployments for stateless apps; use StatefulSets for identity and storage.
- Keep pod images minimal and immutable.
- Set requests/limits; monitor for throttling and OOM.
- Use readiness to protect users from cold starts.
- Harden with SecurityContext and policies.
- Isolate with namespaces, quotas, and NetworkPolicies.
- Automate rollouts via pipelines and GitOps.
- Use topology spread to distribute replicas.
- Always label pods consistently for selection and observability.
- Instrument your app; expose /metrics and health endpoints.
Minimal Pod YAML (with Probes & Resources)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: ghcr.io/example/hello:1.0.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    readinessProbe:
      httpGet: { path: /readyz, port: 8080 }
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet: { path: /healthz, port: 8080 }
      initialDelaySeconds: 10
      periodSeconds: 10
    volumeMounts:
    - name: config
      mountPath: /etc/app
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: hello-config
  terminationGracePeriodSeconds: 30
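To try this sketch, save it (e.g., as hello-pod.yaml; the filename is illustrative) and note the pod will not start until the referenced hello-config ConfigMap exists.
kubectl apply -f hello-pod.yaml
kubectl get pod hello-pod -o wide
# forward the app port and hit the readiness endpoint
kubectl port-forward pod/hello-pod 8080:8080
curl -v http://127.0.0.1:8080/readyz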
Multi-Container Pod with Sidecar & Init Container
apiVersion: v1
kind: Pod
metadata:
  name: api-with-sidecar
  labels:
    app: api
spec:
  initContainers:
  - name: init-wait-db
    image: alpine:3.20
    command: ["sh", "-c", "until nc -z db 5432; do echo waiting for db; sleep 2; done"]
  containers:
  - name: api
    image: ghcr.io/example/api:2.3.1
    ports: [{containerPort: 8080}]
    readinessProbe: { httpGet: { path: /readyz, port: 8080 }, initialDelaySeconds: 5 }
    livenessProbe: { httpGet: { path: /healthz, port: 8080 }, initialDelaySeconds: 15 }
    volumeMounts:
    - { name: shared, mountPath: /var/shared }
  - name: log-sidecar
    image: ghcr.io/example/log-collector:1.0.0
    args: ["--watch=/var/shared/logs"]
    volumeMounts:
    - { name: shared, mountPath: /var/shared }
  volumes:
  - name: shared
    emptyDir: {}
Troubleshooting Pods – Practical Playbook (kubectl & PowerShell)
Use these step-by-step checks when a pod is Pending, CrashLooping, failing probes, or experiencing network/storage issues. You can run the commands on Linux/macOS shells or from PowerShell (cross-platform). For Windows admins, we include PowerShell snippets to wrap kubectl.
1) Fast Status Sweep
# get pods with wide info
kubectl get pods -o wide
# events sorted by time
kubectl get events --sort-by=.lastTimestamp
# describe a pod to see scheduling, probes, and last errors
kubectl describe pod <pod-name>
# YAML (actual state)
kubectl get pod <pod-name> -o yaml
PowerShell Wrapper for Fast Sweep
$Pod = Read-Host "Pod name"
kubectl get pod $Pod -o wide
"--- Events ---"
kubectl get events --sort-by=.lastTimestamp | Select-String $Pod
"--- Describe ---"
kubectl describe pod $Pod
"--- YAML ---"
kubectl get pod $Pod -o yaml | Out-File -FilePath ".\${Pod}-snapshot.yaml"
Write-Host "Snapshot written to ${Pod}-snapshot.yaml"
2) CrashLoopBackOff / Image Issues
# get last container logs (including previous crashes)
kubectl logs <pod> --all-containers --previous
# watch restarts
kubectl get pod <pod> -w
# check image pulls and secrets
kubectl describe pod <pod> | egrep -i "image|pull|secret"
# restart by deleting (controller restores)
kubectl delete pod <pod>
PowerShell: Crash Collector
$Pod = Read-Host "Pod"
$File = ".\${Pod}-crashlogs.txt"
kubectl logs $Pod --all-containers --previous | Out-File $File
kubectl describe pod $Pod | Out-File -Append $File
Write-Host "Collected crash context in $File"
3) Probe Failures (Liveness/Readiness/Startup)
# view probe config
kubectl get pod <pod> -o jsonpath='{.spec.containers[*].readinessProbe}{"\n"}{.spec.containers[*].livenessProbe}{"\n"}'
# look at failing endpoints
kubectl port-forward pod/<pod> 8080:8080
curl -v http://127.0.0.1:8080/readyz
4) Pending Pods / Scheduling
# why pending?
kubectl describe pod <pod> | egrep -i "failedScheduling|taint|insufficient|preempt"
# see node resources
kubectl get nodes -o wide
kubectl describe node <node> | egrep -i "Allocated resources|Taints|Conditions"
# requests/limits snapshot for a deployment
kubectl get deploy <name> -o=jsonpath='{range .spec.template.spec.containers[*]}{.name}{" "}{.resources}{"\n"}{end}'
PowerShell: Scheduling Inspector (Label + Taints)
$Node = Read-Host "Node"
kubectl get node $Node --show-labels
kubectl describe node $Node | Select-String -Pattern "Taints","Conditions","Allocated resources"
5) Networking & DNS
# DNS in pod
kubectl exec -it <pod> -- cat /etc/resolv.conf
kubectl exec -it <pod> -- nslookup my-svc.default.svc.cluster.local
# reach another pod/service
kubectl exec -it <pod> -- /bin/sh -c "nc -zv my-svc 8080"
# test from your machine via port-forward
kubectl port-forward svc/my-svc 8080:80
6) Storage & PVC
# PVC and PV health
kubectl get pvc,pv
kubectl describe pvc <claim>
# file permissions inside container
kubectl exec -it <pod> -- ls -l /mnt/data
# check mount options
kubectl get pod <pod> -o jsonpath='{.spec.volumes}'
7) Events & Timeline Bundle (PowerShell)
$Ns = Read-Host "Namespace"
$Sel = Read-Host "Label selector (e.g. app=api)"
$When = Get-Date -Format "yyyyMMdd-HHmmss"
$Out = ".\k8s-bundle-$When.txt"
"--- Pods ---" | Out-File $Out
kubectl get pods -n $Ns -l $Sel -o wide | Out-File -Append $Out
"--- Events ---" | Out-File -Append $Out
kubectl get events -n $Ns --sort-by=.lastTimestamp | Out-File -Append $Out
"--- Describes ---" | Out-File -Append $Out
kubectl get pods -n $Ns -l $Sel -o name | ForEach-Object { kubectl describe -n $Ns $_ | Out-File -Append $Out }
Write-Host "Wrote $Out"
Labels & Selectors – The Glue for Discovery
Apply consistent labels like app, component, version, env to drive Service selection, NetworkPolicies, PDBs, and autoscaling. Selectors bind Services/PDBs/HPA to matching pods.
- Define a label taxonomy in your platform standards.
- Never break selectors during rollouts.
- Use annotations for non-selective metadata (links, tickets, owners).
- Surface labels to logs/metrics for rich filtering.
Labels & Selectors FAQs (10)
- Why isn’t my Service routing? Selector doesn’t match pod labels.
- How to migrate labels safely? Rollout with both old and new labels until switch.
- How to find pods by label? kubectl get pods -l key=value.
- Can annotations be selectors? No—only labels are used for selection.
- How to version traffic? Use labels (e.g., version=v2) and Services/Ingress rules.
- How to group multiple apps? Use hierarchical labels: team, domain, etc.
- How to standardize? Define org-wide label keys and enforce via policy.
- How to query by multiple labels? -l key1=val1,key2=val2.
- How to see labels? The --show-labels flag on kubectl get.
- How to avoid selector drift? GitOps and CI checks that prevent accidental changes.
Service Association – Stable Endpoints for Pods
Pods are ephemeral; Services provide stable virtual IPs and DNS names. Use ClusterIP inside the cluster, NodePort for lab/testing, LoadBalancer for cloud L4, and Ingress/Gateway for HTTP routing and TLS. A minimal Service sketch follows the list below.
- Headless Services enable direct pod addressability for stateful apps.
- Health checks + readiness determine Service membership.
- Favor Ingress/Gateway for managed HTTP routing.
- Maintain backward compatibility during blue/green or canary rollouts.
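A minimal ClusterIP Service sketch tying a stable name to the api pods from earlier examples; the name and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api                    # must match pod labels or the endpoint list stays empty
  ports:
  - port: 80                    # stable Service port
    targetPort: 8080            # containerPort on the pods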
Service & Ingress FAQs (10)
- Which type to use for internet? LoadBalancer + Ingress/Gateway.
- How to canary? Label-based subset routing or service mesh.
- Why 503s after deploy? Readiness not true; check probes and surge settings.
- How to sticky sessions? SessionAffinity or app-level solutions.
- TLS termination? Ingress/Gateway controllers manage certs via cert-manager.
- How to expose gRPC? Ingress/Gateway with HTTP/2.
- What about WebSockets? Ensure controller supports upgrade headers/timeouts.
- ExternalName services? DNS CNAME to external targets.
- Why does NodePort seem random? Allocated from cluster’s nodePort range; can set explicitly.
- How to rate-limit? Ingress/Gateway policies or service mesh.
Common Multi-Container Use Cases
- Log sidecar shipping container logs to a centralized system.
- Reverse proxy sidecar for mTLS and outbound policies.
- Data fetcher sidecar periodically syncing reference data.
- Adapter converting metrics/log formats.
- Debug ephemeral container injected on demand.
Use-Case FAQs (10)
- Do sidecars scale with the app? Yes—replica count is per pod.
- How to avoid sidecar overhead? Consolidate to node-level agents or meshless patterns where feasible.
- How to guard secrets in sidecars? Mount minimal scope secrets and restrict permissions.
- How to ensure sidecar readiness? Add readiness probes for both main and sidecar.
- What if sidecar crashes? Pod may restart depending on restart policy.
- How to test locally? Use kind or minikube.
- How to refactor away from sidecars? Adopt node agents or mesh with daemonsets.
- How to limit sidecar CPU? Separate requests/limits per container.
- How to share TLS certs? Mount a projected secret volume.
- How to roll image updates? Standard Deployment rollouts or GitOps automation.
Reusable Snippets: Affinity, Tolerations, Probes, PDB
# Affinity (prefer spread across zones)
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels: { app: api }
# Toleration to run on tainted nodes
spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "background"
    effect: "NoSchedule"
# Startup probe to protect slow boot
spec:
  containers:
  - name: app
    startupProbe:
      httpGet: { path: /healthz, port: 8080 }
      failureThreshold: 30
      periodSeconds: 10
# Pod Disruption Budget (keep 90% available)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: "90%"
  selector:
    matchLabels:
      app: api
PowerShell Troubleshooting Pack – Export, Diff, and Heal
These PowerShell helpers work on Windows/macOS/Linux with kubectl installed and configured. They speed up export, diff, and targeted restarts through controllers.
Export a Deployment’s Current Pod Template
$Deploy = Read-Host "Deployment name"
$Ns = Read-Host "Namespace"
$Out = ".\${Deploy}-podtemplate.yaml"
kubectl get deploy $Deploy -n $Ns -o jsonpath='{.spec.template}' | Out-File $Out
Write-Host "Wrote $Out (pod template snapshot)"
Diff Desired vs. Live Pod YAML
$Pod = Read-Host "Pod name"
$Live = ".\${Pod}-live.yaml"
$Ref = Read-Host "Reference YAML path"
kubectl get pod $Pod -o yaml | Out-File $Live
Compare-Object (Get-Content $Ref) (Get-Content $Live) -SyncWindow 2 | Out-File ".\${Pod}-diff.txt"
Write-Host "Diff written to ${Pod}-diff.txt"
Safe Pod Restart via Controller
$Deploy = Read-Host "Deployment"
$Ns = Read-Host "Namespace"
kubectl rollout restart deploy/$Deploy -n $Ns
kubectl rollout status deploy/$Deploy -n $Ns
Putting It All Together
Pods are the execution boundary for your Kubernetes workloads. Treat them as ephemeral, declare robust probes, right-size resources, secure with contexts and policies, and expose them through Services and Ingress. Build strong observability and a repeatable troubleshooting workflow using kubectl and PowerShell. With these patterns, your applications will be resilient, observable, and ready to scale.