Kubernetes Deployment

  • Enterprise production - Millions of verifications/day
  • High availability - 99.9% uptime requirements
  • Auto-scaling - Handles traffic spikes automatically
  • Multi-region - Global deployment across data centers
  • Large teams - DevOps teams with Kubernetes expertise
  • Kubernetes cluster (AWS EKS, Google GKE, Azure AKS)
  • DevOps team with Kubernetes experience
  • Budget: $500-5000+/month (depending on scale)
# Deploy complete VoP stack to Kubernetes
kubectl apply -f k8s/
# Check everything is running
kubectl get pods -n vop-system
# Expected output:
# vop-service-xxx 1/1 Running 0 2m
# vop-service-yyy 1/1 Running 0 2m
# vop-service-zzz 1/1 Running 0 2m
# Forward port to access locally
kubectl port-forward svc/vop-service 8443:443 -n vop-system
# Service available at: https://localhost:8443
Internet → Load Balancer → Kubernetes Cluster
├── VoP Pods (3+ instances)
├── Database Cluster
├── Redis Cluster
└── Monitoring Stack
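The actual manifests ship in `k8s/`; as an orientation aid, a minimal sketch of what the core Deployment and Service might look like (names follow the `vop-service`/`vop-system` conventions used throughout this guide; the image tag and container port are assumptions):

```yaml
# Illustrative sketch only -- the real manifests live in k8s/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vop-service
  namespace: vop-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vop-service
  template:
    metadata:
      labels:
        app: vop-service
    spec:
      containers:
        - name: vop-service
          image: vop-service:v1.0   # hypothetical tag
          ports:
            - containerPort: 8443   # assumed TLS port inside the container
---
apiVersion: v1
kind: Service
metadata:
  name: vop-service
  namespace: vop-system
spec:
  selector:
    app: vop-service
  ports:
    - port: 443        # matches the 8443:443 port-forward used above
      targetPort: 8443
```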
  • High Availability - 99.9% uptime with multi-zone deployment
  • Auto-scaling - Automatically adds/removes instances based on load
  • Load Balancing - Distributes traffic across multiple instances
  • Health Monitoring - Automatic restart of failed instances
  • Rolling Updates - Zero-downtime deployments
  • Resource Management - Efficient CPU and memory usage
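Health monitoring and automatic restarts are driven by container probes. A sketch of what the VoP container's probe configuration might look like (path, port, and scheme are assumptions based on the `/health` endpoint used elsewhere in this guide):

```yaml
# Container-level fragment; goes under spec.template.spec.containers[0]
livenessProbe:          # restart the container if this fails repeatedly
  httpGet:
    path: /health
    port: 8443
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:         # remove the pod from the Service until it passes
  httpGet:
    path: /health
    port: 8443
    scheme: HTTPS
  periodSeconds: 5
```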
  • Network isolation - Micro-segmentation between services
  • Pod Security Standards - Container-level security controls (PodSecurityPolicies are removed in current Kubernetes)
  • RBAC - Role-based access control
  • Secret management - Encrypted storage of certificates and keys
  • Audit logging - Complete audit trail of all activities
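The network isolation above is typically expressed as a NetworkPolicy. A hedged sketch that admits traffic to the VoP pods only from the ingress controller (label selectors and the target port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vop-allow-ingress-only
  namespace: vop-system
spec:
  podSelector:
    matchLabels:
      app: vop-service          # assumed pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8443            # assumed container port
```

Note that NetworkPolicy enforcement requires a CNI plugin that supports it (e.g. Calico or Cilium); on a cluster without one, the policy is silently ignored.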
# Install cert-manager (for SSL certificates)
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

# Install NGINX Ingress (for load balancing)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
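With cert-manager and ingress-nginx in place, TLS is typically wired up through an annotated Ingress. A sketch under assumptions (the hostname and the `letsencrypt-prod` ClusterIssuer are illustrative and must exist in your cluster; the secret name matches the `vop-tls-cert` referenced in the troubleshooting section):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vop-ingress
  namespace: vop-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - vop.example.com        # illustrative hostname
      secretName: vop-tls-cert
  rules:
    - host: vop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vop-service
                port:
                  number: 443
```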
# 3 instances for high availability
replicas: 3

# Rolling updates with zero downtime
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0

# Resource limits
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
# Horizontal Pod Autoscaler
minReplicas: 3
maxReplicas: 20
targetCPUUtilizationPercentage: 70
# Adds pods when average CPU utilization rises above 70%;
# removes pods (after a stabilization window) when it stays below the target
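The autoscaler settings above correspond to a full HPA object along these lines (a sketch; the `vop-hpa` name matches the one used in the troubleshooting section):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vop-hpa
  namespace: vop-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vop-service
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

CPU-based scaling requires the metrics-server to be installed; managed offerings such as GKE ship it by default.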
# Run as non-root user
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  readOnlyRootFilesystem: true
  # Drop all Linux capabilities
  capabilities:
    drop:
      - ALL
# Check service health
kubectl get pods -n vop-system
# View service logs
kubectl logs -f deployment/vop-service -n vop-system
# Check resource usage
kubectl top pods -n vop-system
# View service metrics
kubectl port-forward svc/vop-service 9090:9090 -n vop-system
# Visit: http://localhost:9090/metrics
  • Service Down - Alert when any instance fails
  • High Error Rate - Alert when error rate > 5%
  • High CPU Usage - Alert when CPU > 80%
  • Memory Usage - Alert when memory > 90%
  • Response Time - Alert when response time > 1s
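If your monitoring stack is the Prometheus Operator (e.g. kube-prometheus-stack), alerts like the ones above can be declared as a PrometheusRule. A hedged sketch: the first rule uses the real kube-state-metrics series `kube_deployment_status_replicas_available`, while `vop_requests_total` in the second is a hypothetical application metric with a `status` label:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: vop-alerts
  namespace: vop-system
spec:
  groups:
    - name: vop.rules
      rules:
        - alert: VopServiceDown
          expr: kube_deployment_status_replicas_available{namespace="vop-system", deployment="vop-service"} < 3
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: "Fewer than 3 VoP replicas available"
        - alert: VopHighErrorRate
          # vop_requests_total is a hypothetical app metric -- substitute your own
          expr: |
            sum(rate(vop_requests_total{status=~"5.."}[5m]))
              / sum(rate(vop_requests_total[5m])) > 0.05
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "VoP error rate above 5%"
```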
# Deploy VoP service
kubectl apply -f k8s/
# Update deployment (zero downtime)
kubectl set image deployment/vop-service vop-service=vop-service:v2.0 -n vop-system
# Scale service manually
kubectl scale deployment vop-service --replicas=5 -n vop-system
# Check deployment status
kubectl rollout status deployment/vop-service -n vop-system
# Rollback if needed
kubectl rollout undo deployment/vop-service -n vop-system
# Check all services
kubectl get all -n vop-system
# View recent events
kubectl get events -n vop-system --sort-by='.lastTimestamp'
# Check service endpoints
kubectl get endpoints -n vop-system
# Test connectivity
kubectl run test --image=busybox -it --rm --restart=Never -- \
  wget -qO- --no-check-certificate https://vop-service.vop-system.svc.cluster.local/health
  • Response time: < 200ms average
  • Throughput: Scales with pod count (bounded by maxReplicas and cluster capacity)
  • Uptime: 99.9% (with multi-zone deployment)
  • Concurrent users: 10,000+ simultaneous connections
Traffic Load → Instances → Response Time
Low (< 1000/min) → 3 pods → < 100ms
Medium (5000/min) → 6 pods → < 150ms
High (20000/min) → 15 pods → < 200ms
Peak (50000/min) → 20 pods → < 300ms
# Set resource requests appropriately
resources:
  requests:
    memory: "256Mi"   # Start small
    cpu: "250m"       # Scale up as needed
  limits:
    memory: "512Mi"   # Contain runaway memory (OOM-kill the pod, not the node)
    cpu: "500m"       # Cap CPU usage
# Create EKS cluster
eksctl create cluster --name vop-cluster --region us-west-2
# Deploy VoP
kubectl apply -f k8s/
# Setup load balancer
kubectl apply -f k8s/aws-loadbalancer.yaml
# Create GKE cluster
gcloud container clusters create vop-cluster --zone us-central1-a
# Deploy VoP
kubectl apply -f k8s/
# Setup ingress
kubectl apply -f k8s/gcp-ingress.yaml
# Create AKS cluster
az aks create --resource-group vop-rg --name vop-cluster --generate-ssh-keys
# Deploy VoP
kubectl apply -f k8s/
# Setup ingress
kubectl apply -f k8s/azure-ingress.yaml
# Check pod status
kubectl get pods -n vop-system
# Check pod details
kubectl describe pod <pod-name> -n vop-system
# Common issues:
# - Image pull errors: Check image name and registry access
# - Resource limits: Increase memory/CPU limits
# - Config errors: Check ConfigMap and Secrets
# Check service
kubectl get svc -n vop-system
# Check ingress
kubectl get ingress -n vop-system
# Test internal connectivity
kubectl run debug --image=busybox -it --rm --restart=Never -- \
  wget -qO- --no-check-certificate https://vop-service.vop-system.svc.cluster.local/health
# Check resource usage
kubectl top pods -n vop-system
# Check if auto-scaling is working
kubectl get hpa -n vop-system
# View detailed metrics
kubectl describe hpa vop-hpa -n vop-system
# Check certificate status
kubectl get certificates -n vop-system
# Check cert-manager logs
kubectl logs -n cert-manager deployment/cert-manager
# Force re-issuance by deleting the certificate's TLS secret
# (cert-manager detects the missing secret and issues a new certificate)
kubectl delete secret vop-tls-cert -n vop-system
  • RBAC enabled - Role-based access control configured
  • Network policies - Pod-to-pod communication restricted
  • Pod Security Standards - Container security standards enforced
  • Secrets management - Certificates stored in Kubernetes secrets
  • Image scanning - Container images scanned for vulnerabilities
  • Resource limits - CPU and memory limits set
  • Non-root containers - All containers run as non-root users
  • Read-only filesystem - Container filesystems are read-only
  1. Prepare Kubernetes cluster - Set up EKS/GKE/AKS
  2. Test deployment - Deploy to staging environment
  3. Migrate data - Export/import database and certificates
  4. Switch traffic - Update DNS to point to Kubernetes
  5. Monitor - Ensure everything works correctly
  6. Cleanup - Shut down old Docker environment
  • Kubernetes cluster ready - EKS/GKE/AKS configured
  • VoP deployed to K8s - All services running
  • Database migrated - Data exported/imported
  • Certificates migrated - SSL certificates working
  • DNS updated - Traffic routing to K8s
  • Monitoring configured - Alerts and dashboards setup
  • Load testing completed - Performance validated
  • Rollback plan ready - Can revert if needed