Container Orchestration with Kubernetes
Complete guide to deploying and managing containerized applications with Kubernetes in production.
20 min read
Advanced · DevOps
What You'll Learn
- Kubernetes architecture and core concepts
- Deploying applications with Deployments, Services, and Ingress
- Managing configuration with ConfigMaps and Secrets
- Auto-scaling with HPA and VPA
- Monitoring and observability with Prometheus and Grafana
- Production deployment strategies and best practices
Introduction to Kubernetes
Kubernetes (K8s) is a container orchestration platform that automates deployment, scaling, and management of containerized applications. It provides declarative configuration, self-healing, and horizontal scaling capabilities for production workloads.
Core Benefits
- Automated Deployment: Declarative configuration with rollback capabilities
- Auto-Scaling: Horizontal and vertical scaling based on metrics
- Self-Healing: Automatic restart of failed containers and nodes
- Load Balancing: Built-in service discovery and load distribution
- Zero-Downtime Deployments: Rolling updates with health checks
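In practice these benefits surface through a declarative workflow: you apply a manifest, Kubernetes converges the cluster to it, and a bad release can be undone with one command. A minimal sketch (the manifest file and deployment name are placeholders):

```shell
# Apply the desired state from a manifest
kubectl apply -f deployment.yaml

# Watch the rollout converge; exits non-zero if it fails
kubectl rollout status deployment/web-app

# Roll back to the previous revision
kubectl rollout undo deployment/web-app
```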
Kubernetes Architecture Overview
Control Plane
- API Server (kube-apiserver)
- etcd (cluster data store)
- Controller Manager
- Scheduler
Worker Nodes
- kubelet (node agent)
- kube-proxy (networking)
- Container Runtime (Docker/containerd)
- Pod (smallest deployable unit)
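On a running cluster you can inspect both halves of this architecture directly; a quick sketch (output varies by distribution):

```shell
# Worker nodes, their versions, and runtime details
kubectl get nodes -o wide

# Control-plane components typically run as pods in kube-system
kubectl get pods -n kube-system
```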
Basic Application Deployment
# Basic Application Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.21
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
Configuration Management
ConfigMaps for Application Configuration
# ConfigMap for Application Settings
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://db-service:5432/myapp"
  redis_url: "redis://redis-service:6379"
  log_level: "info"
  features.json: |
    {
      "feature_flags": {
        "new_ui": true,
        "analytics": false
      }
    }
---
# Using ConfigMap in Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: app-deployment
  template:
    metadata:
      labels:
        app: app-deployment
    spec:
      containers:
        - name: app
          image: myapp:v1.0.0
          env:
            - name: DATABASE_URL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: database_url
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: log_level
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            name: app-config
Secrets for Sensitive Data
# Create Secret from command line
kubectl create secret generic app-secrets \
  --from-literal=db-password=secretpassword \
  --from-literal=api-key=yourapikey123
# Secret YAML (base64-encoded values)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  db-password: c2VjcmV0cGFzc3dvcmQ=
  api-key: eW91cmFwaWtleTEyMw==
---
# Using Secrets in Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: app-deployment
  template:
    metadata:
      labels:
        app: app-deployment
    spec:
      containers:
        - name: app
          image: myapp:v1.0.0
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: db-password
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: api-key
Networking and Ingress
Service Types and Load Balancing
- ClusterIP: internal cluster communication only (the default)
- NodePort: external access via a static port on every node
- LoadBalancer: provisions a cloud provider load balancer
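As an illustration, exposing the earlier web-app externally is a one-field change on the Service; the name web-app-public below is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-public
spec:
  type: LoadBalancer   # or NodePort for direct node access
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
```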
Ingress Controller Setup
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
# Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: app-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
Auto-scaling Strategies
Horizontal Pod Autoscaler (HPA)
# HPA based on CPU and memory
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
Vertical Pod Autoscaler (VPA)
# VPA for automatic resource adjustment
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: web-app
        maxAllowed:
          cpu: 1
          memory: 1Gi
        minAllowed:
          cpu: 100m
          memory: 128Mi
Monitoring and Observability
Prometheus and Grafana Setup
# Install Prometheus using Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --set grafana.adminPassword=admin123 \
  --set prometheus.prometheusSpec.retention=30d
# ServiceMonitor for custom metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-app-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: web-app
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s
Application Health Checks
Health Check Best Practices
- Implement both liveness and readiness probes
- Use separate endpoints for different probe types
- Include dependency checks in readiness probes
- Set appropriate timeout and failure thresholds
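Applied together, these practices might look like the following container-level sketch; the /healthz and /ready endpoints are assumed to exist in your application:

```yaml
livenessProbe:
  httpGet:
    path: /healthz      # process-level check only; no dependency calls
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready        # also verifies downstream dependencies (DB, cache)
    port: 8080
  periodSeconds: 5
  timeoutSeconds: 2
  failureThreshold: 2
```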
Package Management with Helm
Creating a Helm Chart
# Create new Helm chart
helm create myapp
# Chart.yaml
apiVersion: v2
name: myapp
description: A Helm chart for myapp
type: application
version: 0.1.0
appVersion: "1.0.0"

# values.yaml
replicaCount: 3
image:
  repository: myapp
  pullPolicy: IfNotPresent
  tag: "v1.0.0"
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

# Install chart
helm install myapp ./myapp --values production-values.yaml
Production Deployment Strategies
Rolling Updates
# Rolling update configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: myapp:v2.0.0
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
Blue-Green Deployment
# Blue-Green deployment with Argo Rollouts
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web-app
spec:
  replicas: 5
  strategy:
    blueGreen:
      activeService: web-app-active
      previewService: web-app-preview
      autoPromotionEnabled: false
      scaleDownDelaySeconds: 300
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: myapp:v2.0.0
Best Practices Summary
Production Readiness Checklist
- Set appropriate resource requests and limits
- Implement proper health checks and monitoring
- Use namespaces for environment separation
- Secure secrets and configure RBAC
- Set up automated backups and disaster recovery
- Implement network policies for security
- Use Helm charts for reproducible deployments
- Monitor cluster and application metrics
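As one example from this checklist, a default-deny NetworkPolicy is a common starting point for network security; the labels below are placeholders and must match your own workloads:

```yaml
# Deny all ingress traffic to pods in this namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # empty selector matches every pod
  policyTypes:
    - Ingress
---
# Then explicitly allow only the flows you need
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-web
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-app
```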
Common Mistakes to Avoid
- Running containers as the root user
- Missing resource limits, causing node exhaustion
- Storing secrets in container images
- Not implementing proper logging and monitoring
- Using latest tags in production
- Ignoring security contexts and pod security standards
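Several of these mistakes can be prevented declaratively in the pod spec itself; a sketch with illustrative values (user IDs, image tag, and limits are placeholders):

```yaml
spec:
  securityContext:
    runAsNonRoot: true        # refuse to start containers running as root
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: web-app
      image: myapp:v1.0.0     # pin an explicit tag; never :latest in production
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      resources:
        limits:               # guard against node resource exhaustion
          cpu: 200m
          memory: 256Mi
```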