
Why I Don't Believe in Serverless

There are always servers. Someone else just runs them. And that someone else charges you 10x. When to use it anyway—and when to run your own metal.

Let me say it up front: serverless is a marketing term, not an architecture.

There are always servers. Someone just decided you don't need to see them. And in exchange for that abstraction, you pay a 10-100x markup on compute, accept hard scaling limits, suffer cold starts and latency you can't tune, and sign up for vendor lock-in.

I say this as someone who's built infrastructure at scale for 20+ years. I've operated data centers, managed Kubernetes clusters, and yes—I've also shipped production systems on Lambda, Cloud Functions, and Vercel Edge. Serverless has its place. But it's not the default, and it's definitely not the future everyone pretends it is.

The Serverless Tax

Let's do the math.

AWS Lambda costs about $0.20 per million requests plus $0.0000166667 per GB-second of compute. Sounds cheap, right? Until you compare it to a dedicated EC2 instance.

A t3.medium (2 vCPU, 4GB RAM) costs ~$30/month. That's 720 hours of continuous uptime. If your Lambda functions average 500ms at roughly 512MB-1GB of memory, you'd need somewhere around 4-5 million requests per month to break even (the exact point depends on the memory allocation). Below that volume, a dedicated box sits partly idle and Lambda wins. Above it, the Lambda bill keeps climbing while the instance price stays flat.

At 50 million requests per month (which isn't uncommon for a mid-scale API), a compute-heavy profile, say 2GB of memory and a few seconds per invocation, puts you at ~$10,000/month on Lambda. The same workload on a few dedicated instances? Maybe $500/month.

That's a 20x markup for not managing servers.
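To make the break-even concrete, here's the arithmetic as a small Python sketch. The prices are the ones quoted above; the 500ms duration and 1GB memory size are illustrative assumptions, and the break-even point moves with them: roughly 3.5 million requests/month at 1GB, roughly 7 million at 512MB, which is where the "few million" ballpark comes from.

```python
# Rough cost comparison: AWS Lambda vs. one dedicated EC2 instance.
# Pricing figures are the ballpark numbers quoted in the text; the
# per-invocation duration and memory size are illustrative assumptions.

LAMBDA_PER_MILLION_REQUESTS = 0.20    # USD per 1M requests
LAMBDA_PER_GB_SECOND = 0.0000166667   # USD per GB-second of compute
EC2_T3_MEDIUM_MONTHLY = 30.0          # USD, ~720 hours of uptime

def lambda_monthly_cost(requests, duration_s=0.5, memory_gb=1.0):
    """Monthly Lambda bill for a given volume and per-invocation profile."""
    request_cost = requests / 1_000_000 * LAMBDA_PER_MILLION_REQUESTS
    compute_cost = requests * duration_s * memory_gb * LAMBDA_PER_GB_SECOND
    return request_cost + compute_cost

def break_even_requests(duration_s=0.5, memory_gb=1.0):
    """Request volume at which Lambda costs as much as one t3.medium."""
    per_request = (LAMBDA_PER_MILLION_REQUESTS / 1_000_000
                   + duration_s * memory_gb * LAMBDA_PER_GB_SECOND)
    return EC2_T3_MEDIUM_MONTHLY / per_request

if __name__ == "__main__":
    print(f"Break-even at 1 GB / 500 ms: {break_even_requests():,.0f} req/month")
    print(f"50M req at 1 GB / 500 ms: ${lambda_monthly_cost(50_000_000):,.2f}/month")
```

Plugging in a heavier profile (2GB of memory and several seconds per invocation) is what gets you into five-figure monthly Lambda bills at the 50-million-request scale.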

The Hidden Costs

But the invoice isn't the only cost. There are second-order effects:

- Cold starts: idle functions add hundreds of milliseconds of latency, right when a user is waiting.
- Hard scaling limits: concurrency caps and execution timeouts you discover exactly when traffic spikes.
- Vendor lock-in: your code gets shaped around one provider's triggers, permissions, and deployment model.
- Loss of control: you can't tune the runtime, the placement, or the hardware underneath your latency.

These aren't theoretical. I've seen all of them blow up production systems.

When Serverless Actually Makes Sense

Okay, so if I'm so anti-serverless, why do I still use it?

Because there are legitimate use cases:

1. True Variable Workloads
If your traffic genuinely spikes unpredictably—think webhook handlers, image processing pipelines, or scheduled batch jobs—serverless can save you from over-provisioning. You pay for what you use, and you don't waste capacity.

2. Prototyping and MVPs
If you're building a proof-of-concept and don't want to think about infra, serverless is a great accelerator. Ship fast, validate the idea, then migrate to real infrastructure if it works.

3. Glue Code and Automation
Small, infrequent tasks—like syncing data between APIs or triggering alerts—are perfect for serverless. You don't need a server running 24/7 to run a function once an hour.
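As a hypothetical sketch of that pattern: a function wired to an hourly scheduler that syncs records between two APIs. The handler signature follows AWS Lambda's Python runtime, but `fetch_updated_records` and `push_records` are placeholders standing in for real API calls.

```python
# Sketch of the "glue code" case: a function invoked once an hour by a
# scheduler instead of a server running 24/7. The handler signature matches
# AWS Lambda's Python runtime; the sync logic below is a placeholder.

import json
from datetime import datetime, timezone

def fetch_updated_records():
    """Placeholder for 'pull recent changes from API A'."""
    return [{"id": 1, "status": "updated"}]

def push_records(records):
    """Placeholder for 'push those changes to API B'."""
    return len(records)

def handler(event, context):
    records = fetch_updated_records()
    synced = push_records(records)
    return {
        "statusCode": 200,
        "body": json.dumps({
            "synced": synced,
            "ran_at": datetime.now(timezone.utc).isoformat(),
        }),
    }
```

A function like this might run for a second, 24 times a day, which is exactly the shape of workload where per-invocation billing beats a monthly instance.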

4. Edge Compute (The Real Future)
This is where serverless actually shines. Running lightweight functions at the edge (Cloudflare Workers, Vercel Edge, Fastly Compute) gives you global latency wins that dedicated servers can't match. But that's not serverless in the traditional sense—it's a fundamentally different architecture.

When to Run Your Own Metal

Here's the rule I follow:

If your workload is predictable and sustained, run your own infrastructure.

APIs, databases, real-time services, background workers—these are all better on dedicated instances. You get:

- Flat, predictable costs that don't scale with request count.
- No cold starts, and full control over latency.
- Your choice of runtime, OS, and tooling, with no vendor lock-in.
- Headroom you've already paid for when traffic grows.

And here's the thing: managing servers isn't that hard anymore. Docker, Kubernetes, and managed services (like AWS ECS or DigitalOcean App Platform) have made it trivial. You're not racking hardware in a data center. You're running a few commands and letting automation handle the rest.

The Middle Path

In practice, most real-world systems are hybrid:

- Core APIs, databases, and workers on dedicated instances, where load is sustained and predictable.
- Serverless for the spiky stuff: webhooks, batch jobs, glue code.
- Edge functions where global latency actually matters.

This gives you cost efficiency where it matters, flexibility where you need it, and simplicity where it makes sense.

The Bottom Line

Serverless is a tool, not a religion. It has real benefits for specific workloads. But it's been over-hyped as the future of cloud by vendors who profit from you not doing the math.

The best infrastructure strategy isn't serverless-first or servers-only. It's economics-first. Look at your workload, measure your costs, and choose the architecture that maximizes performance per dollar.

Because at the end of the day, there are always servers. The only question is: who's paying for them, and how much?

