Kubernetes Basics
Kubernetes core concepts including pods, services, deployments, and namespace management
You are an expert in Kubernetes core concepts for containerized application development and deployment.

## Key Points

- Always set resource `requests` and `limits` on every container to prevent noisy-neighbor problems and enable the scheduler to place pods effectively.
- Use liveness and readiness probes to let Kubernetes automatically restart unhealthy pods and route traffic only to ready ones.
- Store configuration in ConfigMaps and sensitive values in Secrets rather than baking them into images.
- Avoid setting memory limits too low (causing OOMKills) or too high (causing cluster underutilization); profile your application under load to find appropriate values.
- Don't omit readiness probes; without them Kubernetes sends traffic to pods that are still starting up and not ready to serve requests.
Overview
Kubernetes (K8s) is a container orchestration platform that automates deployment, scaling, and management of containerized workloads. It provides declarative configuration, self-healing, service discovery, and rolling updates across clusters of nodes.
Core Concepts
Pods
A Pod is the smallest deployable unit — one or more containers sharing network and storage:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
  labels:
    app: web
spec:
  containers:
    - name: app
      image: myapp/web:1.4.0
      ports:
        - containerPort: 8080
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```
Deployments
Deployments manage ReplicaSets and provide declarative updates, rollbacks, and scaling:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: myapp/web:1.4.0
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-config
            - secretRef:
                name: web-secrets
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```
Services
Services provide stable networking endpoints for a set of Pods:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-public
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8080
```
ConfigMaps and Secrets
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
---
apiVersion: v1
kind: Secret
metadata:
  name: web-secrets
type: Opaque
stringData:
  DATABASE_URL: "postgres://user:pass@db:5432/app"
```
Namespaces
Namespaces partition cluster resources for multi-team or multi-environment isolation:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    env: staging
```
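Namespaces become much more useful when paired with quotas. As a sketch (the quota name and values here are illustrative, not part of the original example), a ResourceQuota caps aggregate resource consumption inside a namespace:

```yaml
# Illustrative quota for the staging namespace: caps the total CPU and
# memory requests/limits, and the pod count, across the whole namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```

Note that once a CPU or memory quota is active, every pod in the namespace must declare the corresponding requests/limits or it will be rejected at admission.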
Implementation Patterns
Ingress for HTTP Routing
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
```
Rolling Back a Deployment
```bash
# Check rollout history
kubectl rollout history deployment/web-app

# Roll back to the previous revision
kubectl rollout undo deployment/web-app

# Roll back to a specific revision
kubectl rollout undo deployment/web-app --to-revision=3
```
Best Practices
- Always set resource `requests` and `limits` on every container to prevent noisy-neighbor problems and enable the scheduler to place pods effectively.
- Use liveness and readiness probes to let Kubernetes automatically restart unhealthy pods and route traffic only to ready ones.
- Store configuration in ConfigMaps and sensitive values in Secrets rather than baking them into images.
Core Philosophy
Kubernetes is a declarative system: you describe the desired state of your workloads, and the control plane continuously works to make reality match that description. This declarative model is Kubernetes' greatest strength, but it requires a shift in thinking from "run these commands in sequence" to "ensure this state exists." Every resource manifest is a contract between you and the cluster, and Kubernetes will enforce that contract through self-healing, rescheduling, and reconciliation loops.
Resource requests and limits are not optional annotations; they are the fundamental mechanism by which Kubernetes makes scheduling and stability decisions. Requests tell the scheduler how much capacity a pod needs to run, and limits tell the kubelet when to throttle or kill a pod that exceeds its allocation. Without requests, the scheduler places pods blindly. Without limits, a single misbehaving pod can consume all node resources and destabilize every other pod on that node. Setting accurate requests and limits requires profiling your application, not guessing.
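One way to backstop the requests-and-limits discipline described above is a namespace-level LimitRange, which injects defaults into containers that omit their own. This is a sketch with illustrative names and values, not a recommendation of specific numbers:

```yaml
# Illustrative LimitRange: containers created in this namespace without
# explicit requests/limits inherit these defaults instead of running as
# unbounded "best effort" pods.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: production
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      default:
        cpu: 500m
        memory: 256Mi
```

Defaults are a safety net, not a substitute for profiling: workloads with known resource profiles should still declare their own values.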
Labels, selectors, and namespaces are the organizational building blocks of every Kubernetes deployment. Labels connect Deployments to Pods, Services to Pods, and HPA to Deployments. A consistent labeling strategy (using standard labels like app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/version) makes it possible to query, monitor, and manage resources across the cluster. Namespaces provide logical isolation for multi-team or multi-environment clusters. Invest in your labeling and namespace strategy early; retrofitting it later is painful.
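The standard labels mentioned above might look like this in a workload's metadata (the values are illustrative):

```yaml
# Illustrative use of the recommended app.kubernetes.io/* labels.
metadata:
  name: web-app
  labels:
    app.kubernetes.io/name: web
    app.kubernetes.io/instance: web-prod
    app.kubernetes.io/version: "1.4.0"
    app.kubernetes.io/managed-by: helm
```

Consistent labels like these make cross-cutting queries possible, e.g. selecting every resource of one application with `kubectl get all -l app.kubernetes.io/name=web`.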
Anti-Patterns
- Running pods without resource requests and limits. Pods without resource specifications are "best effort" and will be the first to be evicted under memory pressure. They also prevent the scheduler from making intelligent placement decisions, leading to node hotspots and instability.
- Using `kubectl apply` for one-off commands against production. Manually applying manifests from a developer's laptop bypasses version control, CI/CD, and peer review. All production changes should flow through a GitOps pipeline or a controlled deployment process.
- Deploying everything into the `default` namespace. The `default` namespace provides no logical separation between teams, environments, or applications. Use dedicated namespaces with resource quotas and network policies to enforce boundaries.
- Skipping health probes. Without liveness probes, Kubernetes cannot detect and restart hung containers. Without readiness probes, the Service sends traffic to pods that are still starting up or temporarily unable to serve requests. Both probe types should be configured for every production container.
- Hardcoding configuration in container images. Baking environment-specific values (database URLs, feature flags, log levels) into the image means rebuilding for every environment. Use ConfigMaps for non-sensitive configuration and Secrets for sensitive values, injected via environment variables or volume mounts.
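As an alternative to the `envFrom` injection shown in the Deployment example, the same ConfigMap can be mounted as files. This pod-spec fragment is a sketch; the mount path is illustrative:

```yaml
# Sketch: exposing the web-config ConfigMap as read-only files under
# /etc/app/config, one file per key.
spec:
  containers:
    - name: app
      image: myapp/web:1.4.0
      volumeMounts:
        - name: config
          mountPath: /etc/app/config
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: web-config
```

Volume mounts have the advantage that Kubernetes updates the mounted files when the ConfigMap changes, whereas environment variables are fixed for the life of the container.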
Common Pitfalls
- Setting memory limits too low (causing OOMKills) or too high (causing cluster underutilization); profile your application under load to find appropriate values.
- Omitting readiness probes, which causes Kubernetes to send traffic to pods that are still starting up and not ready to serve requests.
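One common (though not universal) mitigation for the memory-limit pitfall is to set requests equal to limits. A sketch with illustrative values:

```yaml
# Illustrative resources block: when requests equal limits for both CPU
# and memory, the pod gets the Guaranteed QoS class and is among the
# last candidates for eviction under node memory pressure.
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

The trade-off is lower bin-packing density, since the scheduler must reserve the full limit even when the pod rarely uses it.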
Related Skills
Container Registries
Container registry setup, authentication, and image management for ECR, GCR, GHCR, and Docker Hub
Container Security
Container image scanning, runtime hardening, and security best practices for production workloads
Docker Compose
Docker Compose configuration for multi-service development, testing, and local orchestration
Docker Networking
Docker networking modes, custom networks, DNS resolution, and multi-host connectivity patterns
Dockerfile Optimization
Multi-stage builds, layer caching, and image size optimization for production Docker images
Helm Charts
Helm chart creation, templating, dependency management, and release lifecycle for Kubernetes