Table of Contents
- What is Kubernetes Ingress?
- Ingress Controller Basics
- How Ingress Controllers Work
- Popular Ingress Controllers
- Choosing the Right Ingress Controller
- Installation and Configuration (Step-by-Step)
- Advanced Use Cases
- Best Practices
- Troubleshooting Common Issues
- Conclusion
- References
1. What is Kubernetes Ingress?
Kubernetes Ingress is an API resource that defines rules for routing external HTTP/HTTPS traffic to internal Services. Think of it as a “reverse proxy configuration” for your cluster. Without Ingress, you’d need to expose each Service individually (e.g., via LoadBalancer), leading to higher costs and operational complexity.
Key Features of Ingress:
- HTTP/HTTPS Routing: Route traffic based on paths (e.g., `/api` → `api-service`) or hostnames (e.g., `app.example.com` → `web-service`).
- SSL Termination: Decrypt HTTPS traffic at the edge, so backend Services only handle HTTP.
- Name-Based Virtual Hosting: Host multiple domains (e.g., `app1.example.com`, `app2.example.com`) on a single IP.
- Load Balancing: Distribute traffic across backend Service pods.
Example Ingress Resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx          # Specify the Ingress Controller class
  rules:
    - host: app.example.com        # Route traffic for this hostname
      http:
        paths:
          - path: /                # Route all paths under / to web-service
            pathType: Prefix
            backend:
              service:
                name: web-service  # Target Service name
                port:
                  number: 80       # Target Service port
```
2. Ingress Controller Basics
An Ingress Controller is a daemon that implements the Ingress API. Unlike other Kubernetes resources (e.g., Deployments, Services), the Ingress resource itself is just a set of rules—it does nothing until an Ingress Controller processes it.
Why You Need an Ingress Controller:
- Ingress resources are declarative; controllers are the “executors” that enforce these rules.
- Controllers dynamically update routing logic as Ingress resources change (no manual restarts).
- They handle low-level details like generating proxy configurations (e.g., NGINX config files) or communicating with cloud load balancers (e.g., AWS ALB).
How Ingress Controllers Are Deployed:
- Typically run as a Deployment or DaemonSet in the cluster.
- Exposed externally via a `LoadBalancer` Service (cloud environments) or `NodePort` (on-prem).
- Use `ingressClassName` (Kubernetes 1.18+) to associate Ingress resources with a specific controller (e.g., `nginx`, `traefik`).
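The exposure pattern above can be sketched as a Service manifest. This is an illustrative sketch only — the Helm chart for ingress-nginx generates the real one, and names, labels, and namespace vary by installation method:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer               # Use NodePort instead for on-prem clusters
  selector:
    app.kubernetes.io/name: ingress-nginx   # Selects the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

In a cloud environment, creating a Service of `type: LoadBalancer` like this is what triggers the provider to allocate the external IP that all Ingress traffic enters through.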
3. How Ingress Controllers Work
Here’s a high-level workflow for traffic routing with an Ingress Controller:
- External Traffic Arrives: Traffic from the internet hits the Ingress Controller’s external IP (via its `LoadBalancer` Service).
- Controller Processes Rules: The Ingress Controller watches the Kubernetes API for Ingress resources and parses their rules.
- Routes to Backend Services: Based on the rules, the controller forwards traffic to the appropriate backend Service, which then routes to pods.
Example Workflow with NGINX:
- The NGINX Ingress Controller generates an `nginx.conf` file from Ingress rules.
- When you update an Ingress resource, the controller regenerates `nginx.conf` and reloads NGINX.
- Traffic flows: Client → LoadBalancer IP → NGINX Pod → Backend Service → App Pod.
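For intuition, the generated configuration contains a server block roughly like the following. This is a heavily simplified sketch — the real controller-generated `nginx.conf` is far larger and resolves upstream endpoints dynamically rather than with a static `proxy_pass` target as shown here:

```nginx
# Simplified sketch of what the controller derives from an Ingress rule
server {
    listen 80;
    server_name app.example.com;        # From the Ingress rule's host field

    location / {                        # From path: / with pathType: Prefix
        # In reality the controller routes to Service endpoints dynamically;
        # a static upstream name is shown here only for illustration.
        proxy_pass http://default-web-service-80;
        proxy_set_header Host $host;
    }
}
```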
4. Popular Ingress Controllers
Let’s compare the most widely used Ingress Controllers, their features, and ideal use cases.
4.1 NGINX Ingress Controller
Maintained by: F5 (formerly NGINX Inc.)
GitHub: kubernetes/ingress-nginx
Key Features:
- Supports HTTP/HTTPS, TCP, UDP, and gRPC.
- Advanced traffic management: rate limiting, request rewriting, session affinity.
- TLS termination with integration for Cert-Manager (auto-renew Let’s Encrypt certs).
- Extensible via custom annotations (e.g., `nginx.ingress.kubernetes.io/rewrite-target`).
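As an illustration of annotation-driven behavior, the widely documented `rewrite-target` pattern strips a path prefix before proxying, so `/api/users` reaches the backend as `/users` (the service name `api-service` is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-ingress
  annotations:
    # $2 refers to the second capture group in the path regex below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api(/|$)(.*)              # Regex paths require ImplementationSpecific
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-service
                port:
                  number: 80
```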
Pros:
- Mature, battle-tested, and among the most widely adopted Ingress Controllers in production clusters.
- Rich documentation and community support.
- Works in any environment (cloud, on-prem, edge).
Cons:
- Configuration can be verbose (relies on annotations for advanced features).
- No built-in UI (use external tools like Grafana for monitoring).
Best For:
Most production use cases, especially where flexibility and community support are critical.
4.2 Traefik
Maintained by: Traefik Labs
GitHub: traefik/traefik
Key Features:
- Dynamic Configuration: Auto-discovers Ingress resources, Services, and even Docker containers.
- Built-in dashboard for monitoring routes and health.
- Native support for Let’s Encrypt, middleware (e.g., CORS, redirects), and service meshes (e.g., Istio).
- Declarative configuration via Kubernetes CRDs (e.g., `IngressRoute` for advanced routing).
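A minimal `IngressRoute` sketch, assuming a recent Traefik release (note the CRD API group has changed across versions — `traefik.io/v1alpha1` in Traefik v3, `traefik.containo.us/v1alpha1` in older releases):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: web-app-route
spec:
  entryPoints:
    - web                                    # Traefik's default HTTP entry point
  routes:
    - match: Host(`app.example.com`) && PathPrefix(`/`)
      kind: Rule
      services:
        - name: web-service                  # Backend Service to route to
          port: 80
```

Compared to annotation-based configuration, the CRD approach keeps routing logic in typed, validated fields rather than free-form annotation strings.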
Pros:
- Easier to set up than NGINX for beginners.
- Modern UI and CLI for debugging.
- Cloud-native design (no legacy proxy under the hood).
Cons:
- Smaller community than NGINX (fewer examples for edge cases).
- Enterprise features (e.g., SSO, WAF) require a paid license.
Best For:
Teams prioritizing developer experience, dynamic environments, or modern UIs.
4.3 HAProxy Ingress Controller
Maintained by: HAProxy Technologies
GitHub: haproxytech/kubernetes-ingress
Key Features:
- High Performance: Low latency and high throughput (optimized for TCP/UDP).
- Advanced load-balancing algorithms (least connections, round-robin, source IP hash).
- Built-in metrics (Prometheus endpoint) and logging.
Pros:
- Superior performance for high-traffic workloads (e.g., APIs, real-time apps).
- Rich TCP/UDP support (better than NGINX for non-HTTP traffic).
Cons:
- Smaller ecosystem than NGINX; fewer third-party integrations.
Best For:
Performance-critical applications (e.g., financial services, gaming) or heavy TCP/UDP traffic.
4.4 AWS Load Balancer Controller
Maintained by: AWS
GitHub: kubernetes-sigs/aws-load-balancer-controller
Key Features:
- Provisions AWS ALBs/NLBs directly from Kubernetes resources.
- Integrates with AWS services: WAF, Shield, Certificate Manager (ACM), and Target Groups.
- Supports AWS-specific features: IP-based routing, AWS PrivateLink, and IAM roles for Service accounts (IRSA).
Pros:
- Native AWS integration (no need to manage external load balancers).
- Automatically scales ALBs with cluster growth.
Cons:
- AWS-only (not portable to other clouds or on-prem).
- Limited to ALB/NLB feature sets (e.g., no built-in rate limiting).
Best For:
EKS clusters where you want to leverage AWS-managed load balancers.
5. Choosing the Right Ingress Controller
Use this decision framework to pick the best controller for your needs:
| Factor | NGINX | Traefik | HAProxy | AWS ALB |
|---|---|---|---|---|
| Cloud Agnostic | ✅ Yes | ✅ Yes | ✅ Yes | ❌ AWS-only |
| Ease of Setup | ⚠️ Moderate | ✅ Easy | ⚠️ Moderate | ⚠️ Moderate (EKS) |
| Performance | Good | Good | ✅ Excellent | Good (AWS-managed) |
| Advanced Features | ✅ Annotations | ✅ CRDs + UI | ✅ TCP/UDP focus | ✅ AWS integrations |
| Community Support | ✅ Largest | ⚠️ Growing | ⚠️ Niche | ✅ AWS-backed |
Key Questions to Ask:
- Where is your cluster hosted? Use cloud-specific controllers (e.g., AWS ALB) for managed integrations.
- What protocols do you need? HAProxy for TCP/UDP; NGINX/Traefik for HTTP/HTTPS.
- Do you need a UI? Traefik has a built-in dashboard.
- Is portability critical? Avoid cloud-specific controllers (e.g., AWS ALB) if you might migrate clusters.
6. Installation and Configuration (Step-by-Step)
Let’s walk through installing the NGINX Ingress Controller (the most popular choice) and configuring a basic route. We’ll use Helm for simplicity, as it streamlines installation and upgrades.
6.1 Installing NGINX Ingress Controller with Helm
Prerequisites:
- A Kubernetes cluster (v1.19+ recommended).
- Helm 3 installed locally.
Step 1: Add the NGINX Ingress Helm Repo
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```
Step 2: Install the Controller
For cloud clusters (GKE, EKS, AKS), use a LoadBalancer Service to expose the controller externally:
```bash
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=LoadBalancer
```
For on-prem clusters, use NodePort instead:
```bash
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=NodePort
```
Step 3: Verify Installation
Check that the controller pods are running:
```bash
kubectl get pods -n ingress-nginx
# Output should show a controller pod in the Running state, e.g.
# nginx-ingress-ingress-nginx-controller-xxxxxxxxx-xxxxx   1/1   Running
```
Get the external IP (cloud clusters) or NodePort (on-prem):
```bash
kubectl get svc -n ingress-nginx
# Find the controller's LoadBalancer Service and note its EXTERNAL-IP
# (e.g., a56b2d1e8f7c4.us-west-2.elb.amazonaws.com)
```
6.2 Deploying a Sample Application
Let’s deploy a simple web app to test routing. Create a web-app.yaml file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:alpine      # Simple NGINX web server
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-app                   # Match pods with label app=web-app
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP                  # Internal-only Service (no external exposure)
```
Apply it:
```bash
kubectl apply -f web-app.yaml
```
6.3 Creating an Ingress Resource
Create an Ingress resource to route traffic to web-service. Save as ingress.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
spec:
  ingressClassName: nginx          # Use the NGINX controller
  rules:
    - host: app.example.com        # Replace with your domain (or use /etc/hosts for testing)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```
Apply it:
```bash
kubectl apply -f ingress.yaml
```
Verify the Ingress is created:
```bash
kubectl get ingress
# Output:
# NAME              CLASS   HOSTS             ADDRESS         PORTS   AGE
# web-app-ingress   nginx   app.example.com   <EXTERNAL-IP>   80      5m
```
6.4 Testing the Ingress
To test, map app.example.com to the controller’s external IP in your /etc/hosts file (or use a real domain with DNS):
```
# Example: Add this line to /etc/hosts (replace <EXTERNAL-IP> with your controller's IP)
<EXTERNAL-IP> app.example.com
```
Now curl the domain:
```bash
curl app.example.com
# Output: NGINX welcome page (from the web-app pods)
```
7. Advanced Use Cases
7.1 TLS Termination with Cert-Manager
Secure your Ingress with HTTPS using Cert-Manager, which automates Let’s Encrypt certificate issuance.
Step 1: Install Cert-Manager
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```
Step 2: Create a ClusterIssuer (Let’s Encrypt)
Save as cluster-issuer.yaml:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]     # Replace with your email
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx           # Use NGINX for HTTP-01 challenges
```
Apply it:
```bash
kubectl apply -f cluster-issuer.yaml
```
Step 3: Update Ingress to Use TLS
Modify your Ingress resource to include a `tls` section, plus a `cert-manager.io/cluster-issuer` annotation so Cert-Manager knows which issuer to request the certificate from:
```yaml
metadata:
  name: web-app-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # Issuer to request the cert from
spec:
  tls:
    - hosts:
        - app.example.com          # Domain to secure
      secretName: app-tls          # Secret to store the certificate
  rules:
    - host: app.example.com
      # ... (existing rules)
```
Cert-Manager will automatically issue a certificate and store it in the app-tls Secret.
7.2 Path-Based Routing
Route traffic to different Services based on URL paths (e.g., /api → api-service, /web → web-service):
```yaml
rules:
  - host: app.example.com
    http:
      paths:
        - path: /api
          pathType: Prefix
          backend:
            service:
              name: api-service
              port:
                number: 80
        - path: /web
          pathType: Prefix
          backend:
            service:
              name: web-service
              port:
                number: 80
```
7.3 Canary Deployments
Route a percentage of traffic to a “canary” Service for testing (NGINX-specific):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        # Enable canary
    nginx.ingress.kubernetes.io/canary-weight: "30"   # Route 30% of traffic to the canary
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service-canary              # Canary Service
                port:
                  number: 80
```
8. Best Practices
1. Use ingressClassName
Always specify ingressClassName to avoid ambiguity (especially if multiple controllers are running).
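For reference, the class name points at an IngressClass resource, which binds it to a controller implementation. A sketch matching the NGINX Ingress Controller (the `is-default-class` annotation is optional and makes this class apply to Ingresses that omit `ingressClassName`):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx   # Identifies the controller implementation
```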
2. Secure Ingress Controllers
- Restrict controller access with NetworkPolicies.
- Use least-privilege ServiceAccounts for controllers.
- Regularly update controllers to patch security vulnerabilities.
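A minimal NetworkPolicy sketch that only admits traffic to the app pods from the controller's namespace. It assumes the `app: web-app` label from the earlier example and Kubernetes 1.21+, which automatically labels namespaces with `kubernetes.io/metadata.name`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller-only
spec:
  podSelector:
    matchLabels:
      app: web-app                   # Applies to the web-app pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 80
```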
3. Monitor Logs and Metrics
- Ingress controllers generate logs with request details (increase verbosity via `--v=2` for NGINX).
- Scrape metrics (e.g., NGINX provides a Prometheus endpoint) to track latency, error rates, and throughput.
4. Avoid Overly Complex Rules
- Split large Ingress resources into smaller, focused ones (e.g., one per microservice).
- Use `pathType: Exact` instead of `Prefix` when possible to avoid unintended routing.
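To illustrate the difference between the two path types (the service names here are hypothetical):

```yaml
paths:
  - path: /healthz
    pathType: Exact                # Matches only /healthz — not /healthz/live
    backend:
      service:
        name: health-service
        port:
          number: 80
  - path: /api
    pathType: Prefix               # Matches /api, /api/v1, /api/v1/users, ...
    backend:
      service:
        name: api-service
        port:
          number: 80
```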
5. Test Ingress Rules
- Use `kubectl describe ingress <name>` to validate rules.
- Test TLS with `openssl s_client -connect app.example.com:443`.
9. Troubleshooting Common Issues
Issue: Ingress Resource Shows “No Address”
Cause: The Ingress Controller isn’t running or the LoadBalancer Service failed to provision an IP.
Fix:
- Check controller pods: `kubectl get pods -n ingress-nginx`.
- Check controller logs: `kubectl logs -n ingress-nginx <pod-name>`.
Issue: 404 Errors When Accessing the Ingress
Cause: Mismatched Service name/port, or pathType misconfiguration.
Fix:
- Verify the backend Service exists: `kubectl get svc <service-name>`.
- Check Ingress events: `kubectl describe ingress <ingress-name>` (look for “endpoints not found”).
Issue: TLS Certificate Not Issued
Cause: Cert-Manager isn’t running, or the Let’s Encrypt challenge failed.
Fix:
- Check Cert-Manager pods: `kubectl get pods -n cert-manager`.
- Check certificate status: `kubectl describe certificate app-tls`.
10. Conclusion
Ingress Controllers are the backbone of external traffic management in Kubernetes. They simplify routing, enhance security, and reduce operational overhead compared to exposing Services directly.
By choosing the right controller (e.g., NGINX for flexibility, Traefik for developer experience, AWS ALB for EKS), configuring TLS, and following best practices, you can build a robust, scalable entry point to your Kubernetes applications.
Start small with a basic Ingress setup, then layer in advanced features like canary deployments or WAF integration as your needs grow.