TestForge Blog

Kubernetes Dev & Ops in Practice 4 — Network Design (Service / Ingress / NetworkPolicy)

A practical guide to Kubernetes Service types, Ingress configuration, and controlling Pod-to-Pod traffic with NetworkPolicy. Design cluster-internal and external traffic flows with confidence.

TestForge Team

Kubernetes Networking Fundamentals

Kubernetes networking follows two core rules:

  1. All Pods can communicate with each other — by default, any Pod in the cluster can reach any other Pod directly
  2. Pod IPs change on restart — Services provide stable endpoints

Understanding this model makes it clear why Services, Ingress, and NetworkPolicy exist.


1. Choosing a Service Type

ClusterIP — Internal Communication (Default)

apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP

Accessible only within the cluster. Use this as the default for all internal services.
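
A headless variant of ClusterIP (setting `clusterIP: None`) skips the virtual IP entirely: DNS resolves the Service name to the individual Pod IPs, which is what per-Pod StatefulSet addresses rely on. A minimal sketch (the `redis-headless` / `app: redis` names are illustrative):

```yaml
# Headless Service — no virtual IP; DNS returns the Pod IPs directly.
# Names (redis-headless, app: redis) are illustrative, not from this article's manifests.
apiVersion: v1
kind: Service
metadata:
  name: redis-headless
  namespace: production
spec:
  clusterIP: None        # makes the Service headless
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
```

This is the Service behind addresses like `redis-0.redis-headless.production.svc.cluster.local` in the DNS section below.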

NodePort — External Access via Node Port (Dev/Test Only)

spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # Range: 30000-32767

Accessible on port 30080 of any node. Use only for local testing or on-premises environments without a load balancer — not in production.

LoadBalancer — Cloud Load Balancer Integration

spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443

Cloud providers (AWS ELB, GCP LB, etc.) automatically provision a load balancer. Since each Service creates a separate load balancer, use Ingress when exposing multiple services to avoid high costs.
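
Provider-specific behavior is usually tuned through Service annotations. A sketch for AWS (annotation names vary by provider and controller version, so verify against your cloud's documentation):

```yaml
# LoadBalancer Service with AWS-specific annotations (assumed environment: AWS).
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"       # provision an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"  # VPC-internal load balancer, not internet-facing
spec:
  type: LoadBalancer
  selector:
    app: api-server
  ports:
    - port: 443
      targetPort: 8443
```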

Service Type Decision Guide

No external access needed      → ClusterIP
Multiple HTTP/HTTPS services   → Ingress + ClusterIP
Single TCP/UDP service         → LoadBalancer
Local dev or testing           → NodePort

2. Ingress — HTTP Traffic Routing

Ingress is an L7 (HTTP/HTTPS) gateway that routes multiple services through a single load balancer. Note that an Ingress resource does nothing on its own: an Ingress controller (such as ingress-nginx) must be running in the cluster.

Basic Ingress Configuration (nginx)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /  # rewrites every matched path to "/"; use regex capture groups if backends need the subpath
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api/v1
            pathType: Prefix
            backend:
              service:
                name: api-v1-service
                port:
                  number: 80
          - path: /api/v2
            pathType: Prefix
            backend:
              service:
                name: api-v2-service
                port:
                  number: 80
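
ingress-nginx can also split traffic for canary releases: a second Ingress for the same host and path, marked with canary annotations, receives a weighted share of requests. A sketch (the `api-v1-canary-service` name is assumed, not from the manifests above):

```yaml
# Canary Ingress — sends ~20% of traffic for the same host/path to a canary Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress-canary
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"  # percentage of requests routed to the canary
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api/v1
            pathType: Prefix
            backend:
              service:
                name: api-v1-canary-service  # assumed canary Service name
                port:
                  number: 80
```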

Automating TLS with cert-manager

# ClusterIssuer (Let's Encrypt)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
---
# Ingress (fragment) — add this annotation for automatic certificate issuance
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls  # cert-manager creates this automatically

Advanced Ingress — Rate Limiting & Timeouts

metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-connections: "20"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"

3. NetworkPolicy — Controlling Pod-to-Pod Traffic

By default, Kubernetes allows all Pod-to-Pod communication. NetworkPolicy enforces an explicit whitelist, passing only traffic you’ve approved.

Default Policy — Deny All Ingress

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}  # All Pods in the namespace
  policyTypes:
    - Ingress       # Block inbound only; outbound still allowed
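
For a stricter baseline, the same pattern can deny outbound traffic as well. A sketch; note that once this is applied, every Pod needs explicit egress rules, including one for DNS (port 53), or name resolution breaks cluster-wide:

```yaml
# Companion policy: deny all outbound traffic by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}  # All Pods in the namespace
  policyTypes:
    - Egress       # Block outbound only; pair with default-deny-ingress for both directions
```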

API Server — Allow Only Specific Sources

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app: redis
      ports:
        - protocol: TCP
          port: 6379
    # Allow DNS — without this, domain lookups will fail
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP  # DNS falls back to TCP for large responses
          port: 53
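
Egress rules on the API server cover only one direction: with default-deny-ingress applied to the namespace, the database Pods also need an ingress policy that admits the API server, or the connection is dropped on arrival. A sketch following the labels used above:

```yaml
# Ingress side of the api-server → postgres path.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-server
      ports:
        - protocol: TCP
          port: 5432
```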

Cross-Namespace Communication

# Allow payment-service → auth-service
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-payment
  namespace: auth-service
spec:
  podSelector:
    matchLabels:
      app: auth-api
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: payment-service
          podSelector:   # same from-entry as namespaceSelector above → both must match (AND)
            matchLabels:
              app: payment-api
      ports:
        - port: 8080

4. DNS and Service Discovery

Kubernetes uses CoreDNS for service discovery.

# Service DNS pattern: <service-name>.<namespace>.svc.cluster.local

# Same namespace: service name is sufficient
curl http://api-service/health

# Cross-namespace: use FQDN
curl http://api-service.production.svc.cluster.local/health

# Direct StatefulSet Pod access via a headless Service
# (Redis speaks its own protocol, not HTTP — test with redis-cli rather than curl)
redis-cli -h redis-0.redis-headless.production.svc.cluster.local -p 6379 ping

ndots Tuning — DNS Performance

spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"  # Reduce from default 5 to cut unnecessary lookups

5. Full Traffic Flow

External request
    ↓
[Cloud Load Balancer]
    ↓
[Ingress Controller (nginx)]
    ↓ (TLS termination, routing)
[ClusterIP Service]
    ↓
[Pod] → [NetworkPolicy check] → [DB Service] → [DB Pod]

Operational Checklist

# Verify Service endpoints (is a Pod connected?)
kubectl get endpoints api-service -n production

# Inspect Ingress status
kubectl describe ingress api-ingress -n production

# List NetworkPolicies
kubectl get networkpolicy -n production

# Test DNS resolution
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup api-service.production.svc.cluster.local

# Test inter-service connectivity
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
  curl http://api-service.production.svc.cluster.local/health

Summary

Component        Role                        When to Use
ClusterIP        Stable internal endpoint    All internal services by default
LoadBalancer     Cloud LB integration        Single TCP/UDP service for external access
Ingress          HTTP routing + TLS          Multiple HTTP services for external access
NetworkPolicy    Traffic whitelist           Explicit control of inter-service communication

Next: Helm in Practice — Chart structure, environment separation with values files, and deployment strategies.