TestForge Blog

API Gateway Architecture — Designing the Microservices Entry Point

The role and design patterns of an API Gateway. Comparing Kong, AWS API Gateway, and Nginx, with practical setup for auth, rate limiting, routing, and circuit breaking.

TestForge Team

What Is an API Gateway?

The single entry point for a microservices system.

Client

API Gateway ← Auth/AuthZ, Rate Limit, Routing, Logging

┌───────────────────────────────────────┐
│ User Service │ Order Service │ Payment │
└───────────────────────────────────────┘

Without a Gateway, clients call services directly:

  • Auth logic duplicated in each service
  • Clients must know internal service addresses
  • CORS and Rate Limit configuration scattered

Core Features

Feature           Description
----------------  -------------------------------------
Routing           URL pattern → backend service mapping
Auth/AuthZ        JWT validation, API key verification
Rate Limiting     Per-IP/user request caps
Load Balancing    Distribute load across instances
Circuit Breaker   Isolate failing services
Logging/Tracing   Distributed tracing (Jaeger, Zipkin)
Caching           Cache frequently requested responses
Transformation    Request/response data transformation
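Several of these concerns reduce to small, well-known algorithms. As an illustration of the Rate Limiting row (a sketch, not any particular gateway's implementation), a per-client fixed-window counter looks like this:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Per-client request caps over a fixed time window (e.g. 100 req/min)."""

    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(int)  # (client, window index) -> count

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        key = (client_id, int(now // self.window))
        if self.counters[key] >= self.limit:
            return False  # over the cap: the gateway would answer 429
        self.counters[key] += 1
        return True

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
print([limiter.allow("10.0.0.1", now=0) for _ in range(4)])  # [True, True, True, False]
```

Production gateways typically keep these counters in Redis so that all gateway replicas share one view of each client's usage.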

Solution Comparison

                  Kong             AWS API Gateway      Nginx            Traefik
Type              Open source      Managed              Open source      Open source
Config            Admin API        Console/Terraform    nginx.conf       YAML
Plugins           Rich ecosystem   Limited              Lua extensions   Middleware
Cost              Free (OSS)       Per-request billing  Free             Free
K8s Integration   Kong Ingress     -                    Nginx Ingress    Native

1. Kong Gateway Setup

# Kong + PostgreSQL (docker-compose)
services:
  kong-db:
    image: postgres:16
    environment:
      POSTGRES_DB: kong
      POSTGRES_USER: kong
      POSTGRES_PASSWORD: kong

  kong:
    image: kong:3.6
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-db
      KONG_PG_PASSWORD: kong
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
    depends_on: [kong-db]
    ports:
      - "8000:8000"   # Proxy
      - "8001:8001"   # Admin API
# Initialize the schema once before first start:
#   docker compose run kong kong migrations bootstrap
# Register a service
curl -X POST http://localhost:8001/services \
  -d name=user-service \
  -d url=http://user-service:8080

# Register a route
curl -X POST http://localhost:8001/services/user-service/routes \
  -d "paths[]=/api/users" \
  -d "methods[]=GET" \
  -d "methods[]=POST"

# JWT auth plugin
curl -X POST http://localhost:8001/services/user-service/plugins \
  -d name=jwt

# Rate limiting plugin
curl -X POST http://localhost:8001/plugins \
  -d name=rate-limiting \
  -d config.minute=100 \
  -d config.hour=10000 \
  -d config.policy=redis \
  -d config.redis_host=redis
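With the jwt plugin enabled, requests are rejected until a Kong consumer with a JWT credential exists (created via POST /consumers and /consumers/{name}/jwt); the token's iss claim must match that credential's key. A stdlib-only sketch of minting such an HS256 token, using a hypothetical key/secret pair (use the values Kong returns for your consumer):

```python
import base64, hashlib, hmac, json

def b64url(data):
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_kong_jwt(key, secret):
    """HS256 JWT whose iss claim is the Kong JWT credential's key."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"iss": key}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# Hypothetical credential values for illustration only
token = make_kong_jwt("my-consumer-key", "my-consumer-secret")
# Send as: curl http://localhost:8000/api/users -H "Authorization: Bearer <token>"
```

In practice you would use a JWT library, but the wire format is exactly this: two base64url JSON segments plus an HMAC-SHA256 signature.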

2. Kubernetes Ingress + Nginx

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Strip the matched prefix so backends see /<rest> (uses the capture groups below)
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # Rate Limiting
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-connections: "20"
    # CORS
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://testforge.kr"
    # JWT validation (external auth service)
    nginx.ingress.kubernetes.io/auth-url: "http://auth-service/validate"
    nginx.ingress.kubernetes.io/auth-signin: "https://testforge.kr/login"
spec:
  ingressClassName: nginx
  tls:
  - hosts: [api.testforge.kr]
    secretName: api-tls
  rules:
  - host: api.testforge.kr
    http:
      paths:
      - path: /api/users(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service: { name: user-service, port: { number: 8080 } }
      - path: /api/orders(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service: { name: order-service, port: { number: 8080 } }
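The auth-url annotation makes the Ingress controller issue an internal subrequest for every incoming request: a 2xx answer lets the request through, anything else rejects it (and auth-signin redirects browsers to the login page). A toy sketch of the decision such a /validate endpoint makes, with a hypothetical token store standing in for real JWT verification:

```python
VALID_TOKENS = {"token-abc"}  # hypothetical; a real service verifies a JWT signature

def validate(headers):
    """Return the HTTP status the auth subrequest would answer with."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer ") and auth.removeprefix("Bearer ") in VALID_TOKENS:
        return 200   # nginx forwards the original request to the backend
    return 401       # nginx redirects the client to auth-signin
```

Because the check runs in the gateway layer, none of the backend services behind the Ingress need their own auth middleware.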

3. Circuit Breaker Pattern

@CircuitBreaker(name = "payment", fallbackMethod = "paymentFallback")
public PaymentResponse processPayment(PaymentRequest request) {
    return paymentClient.pay(request);
}

public PaymentResponse paymentFallback(PaymentRequest request, Exception e) {
    // On failure, queue for later processing
    pendingQueue.add(request);
    return PaymentResponse.pending("Your payment is being processed");
}

4. Distributed Tracing

# Kong OpenTelemetry plugin
curl -X POST http://localhost:8001/plugins \
  -d name=opentelemetry \
  -d config.endpoint=http://jaeger:4318/v1/traces \
  -d config.resource_attributes.service.name=api-gateway

The plugin injects and propagates trace context headers (W3C traceparent by default) on every request, enabling end-to-end flow visualization in Jaeger.
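Concretely, the W3C trace context header is `version-traceid-spanid-flags` in lowercase hex. Generating a valid one by hand (useful for smoke-testing the tracing pipeline) is a one-liner sketch:

```python
import re, secrets

def make_traceparent():
    """W3C traceparent: version (00), 16-byte trace id, 8-byte span id, flags (01 = sampled)."""
    trace_id = secrets.token_hex(16)   # 16 bytes -> 32 hex chars
    span_id = secrets.token_hex(8)     # 8 bytes  -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

header = make_traceparent()
assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-01", header)
```

Every service that forwards this header with a fresh span id keeps the whole request tree stitched together under one trace id.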

Design Principles

  1. Keep the Gateway thin: Business logic belongs in backend services; the Gateway handles cross-cutting concerns only
  2. Fault isolation: One failing service must not take down the entire Gateway
  3. API versioning: Separate API versions via /v1/, /v2/ paths
  4. Security: Internal services must not be directly reachable from outside (NetworkPolicy)
  5. Observability: Log all requests/responses + track latency

Routing Strategy

/api/v1/users/**  → user-service:8080
/api/v1/orders/** → order-service:8080
/api/v1/pay/**    → payment-service:8080  (TLS enforced)
/api/internal/**  → blocked from external access (403)
/health           → bypass auth (health check)
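The table above amounts to longest-prefix routing with two special outcomes. A minimal sketch (illustrative only — the service names mirror the table, and TLS enforcement is left as a comment):

```python
ROUTES = [  # (path prefix, decision) -- longest matching prefix wins
    ("/api/internal/", ("block", 403)),
    ("/api/v1/users/", ("forward", "user-service:8080")),
    ("/api/v1/orders/", ("forward", "order-service:8080")),
    ("/api/v1/pay/", ("forward", "payment-service:8080")),  # TLS enforced upstream
    ("/health", ("bypass-auth", "gateway")),
]

def route(path):
    matches = [(prefix, decision) for prefix, decision in ROUTES if path.startswith(prefix)]
    if not matches:
        return ("block", 404)
    return max(matches, key=lambda m: len(m[0]))[1]

print(route("/api/v1/users/42"))     # ('forward', 'user-service:8080')
print(route("/api/internal/debug"))  # ('block', 403)
```

Ordering rules by specificity (longest prefix first) is what keeps a broad rule like `/api/**` from shadowing the internal-only block.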