
The API Gateway Pattern (And Why Everyone Gets It Wrong)

Most API gateways are overengineered choke points that add latency and failure modes. Here is the minimal viable architecture that actually scales—and when you don’t need a gateway at all.

The Architect’s Trap

In the early days of microservices, we were told the API Gateway was the holy grail. It was the entry point, the traffic cop, the security guard, and the translator all rolled into one. It sounded perfect on a whiteboard: centralize your cross-cutting concerns, and let your developers focus on the business logic. But for many, the API Gateway became a bottleneck—a monolithic choke point that added latency, complexity, and a single point of failure that mirrored the very problems microservices were supposed to solve.

The Multi-Layered Latency Tax

Every hop in your network costs you. When you place a heavy gateway (like an over-configured Istio or a legacy Kong cluster) in front of your services, you’re adding a processing layer that has to parse headers, run middleware, check auth, and rebuild the request before it ever reaches your services. At Link11, we’ve seen cases where the gateway overhead accounted for 40% of the total request time. In 2026, when micro-latency is the difference between a fluid user experience and a sluggish one, this is architectural malpractice.
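To make the latency tax concrete, here is a small sketch with hypothetical numbers (not measurements) showing how a heavy gateway hop can end up dominating the end-to-end budget:

```typescript
// Illustrative latency budget for a single request path.
// All numbers are hypothetical, chosen to show how a heavy
// gateway hop can account for 40% of total request time.
type Hop = { name: string; ms: number };

const hops: Hop[] = [
  { name: "gateway (TLS, header parsing, middleware, auth)", ms: 80 },
  { name: "load balancer", ms: 5 },
  { name: "service handler", ms: 115 },
];

const totalMs = hops.reduce((sum, h) => sum + h.ms, 0);
const gatewayShare = hops[0].ms / totalMs;

console.log(`total: ${totalMs}ms, gateway: ${(gatewayShare * 100).toFixed(0)}%`);
// With these numbers: total 200ms, gateway share 40%.
```

The point is not the exact figures but the shape of the budget: every millisecond the gateway spends re-parsing and re-validating is pure overhead the service never sees.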

When the Gateway is a Monolith in Disguise

The biggest mistake I see engineering teams make is treating the gateway as a dumping ground for every feature they don’t want to write in a service. Rate limiting? Put it in the gateway. Payload transformation? Gateway. Response caching? Gateway. Suddenly, your gateway config is a 5,000-line YAML file that only two people understand, and a single typo can take down your entire production environment. You haven’t escaped the monolith; you’ve just moved it from the application layer to the network layer.
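Most of those features are small enough to live in the service that actually owns the policy. As a hedged sketch (not any particular library’s API), here is what pulling rate limiting out of the gateway and into the service can look like — a token bucket in a few dozen lines:

```typescript
// A minimal in-process token-bucket rate limiter: the kind of
// cross-cutting concern that can live inside the service instead
// of a 5,000-line gateway config. Sketch only, not a library API.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,      // max burst size
    private readonly refillPerSecond: number, // sustained rate
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it should be throttled.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSecond,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Usage: a burst of 2 requests, refilling at 1 request/second.
const bucket = new TokenBucket(2, 1, 0);
console.log(bucket.allow(0), bucket.allow(0), bucket.allow(0)); // true true false
console.log(bucket.allow(1500)); // true: 1.5s later a token has refilled
```

The service that owns the endpoint owns the policy, can tune it per route, and a typo here breaks one deployment, not your entire edge.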

The Minimal Viable Architecture

A modern, resilient architecture doesn’t need a giant gateway. The pragmatic approach: keep the edge layer dumb (routing and TLS termination, nothing more), push cross-cutting concerns like rate limiting, transformation, and caching into the services that own them, and add infrastructure only when a concrete bottleneck demands it.
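As a sketch of how little the “gateway” layer can be, here is a hypothetical Kubernetes Ingress that does nothing but TLS termination and path routing. The hostnames, service names, and annotation are placeholders, not a recommended production config:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge-routing
  annotations:
    # Keep the edge dumb: no rewrites, no auth, no transformation here.
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com]
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service: { name: orders, port: { number: 8080 } }
          - path: /users
            pathType: Prefix
            backend:
              service: { name: users, port: { number: 8080 } }
```

Twenty-odd lines that two people can read in a minute, versus 5,000 lines only two people understand.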

Knowing When to Walk Away

If you have fewer than 10 microservices and your team is small, you probably don’t need a dedicated API Gateway at all. A simple Cloudflare Worker or a properly configured Kubernetes Ingress is often more than enough. The more moving parts you add, the more surface area you give to attackers and outages. In the world of infrastructure, less isn’t just more—it’s safer.
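To illustrate how far a simple edge worker gets you, here is a hedged Cloudflare-Worker-style sketch: a pure routing table plus a thin fetch handler that forwards and does nothing else. The upstream hostnames are hypothetical placeholders:

```typescript
// "Gateway-less" edge routing in the Cloudflare Worker style:
// a static routing table plus a thin fetch handler.
// Upstream origins are hypothetical placeholders.
const routes: Record<string, string> = {
  "/orders": "https://orders.internal.example.com",
  "/users": "https://users.internal.example.com",
};

// Pure function: pick the upstream origin for a path, or null for 404.
export function resolveUpstream(path: string): string | null {
  for (const [prefix, origin] of Object.entries(routes)) {
    if (path === prefix || path.startsWith(prefix + "/")) return origin;
  }
  return null;
}

// Worker entry point: route, forward, and do nothing else at the edge.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const upstream = resolveUpstream(url.pathname);
    if (!upstream) return new Response("not found", { status: 404 });
    return fetch(upstream + url.pathname + url.search, request);
  },
};
```

Auth, rate limits, and caching stay in the services; the edge stays small enough that there is almost nothing to break.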

Build for scale, but don’t build for a future that hasn’t happened yet. Keep your pipes dumb and your endpoints smart. That is how you build a system that actually stays online when the world tries to take it down.

