Serverless Is Amazing, But Here’s the Big Problem

Mika Yeap

First it was Docker. Then microservices. Then Kubernetes. And now, serverless. Servers are a pain in the neck, right? What web developer wants to learn Apache and Linux just to deploy a SPA? And that’s just the deployment. Maintenance is a different ballgame. Load balancing, security patches, under-provisioning, over-provisioning—it’s an endless supply of problems. Is serverless the answer?

Serverless computing has been growing in popularity in recent years. But how well does it actually solve the problem? First, let’s define it. In my own words: serverless computing abstracts away the server so that developers like you and me don’t have to worry about its operation. You get the use of a server without managing one yourself. Technically, it’s not literally “serverless”—you just don’t have to think about the servers you’re using. You submit your code, and the provider ensures it runs when needed, regardless of volume. Auto-scaling and server management are no longer your concern.

On the surface, this sounds cool. You can simply write some code, bundle it as a Docker image, then deploy it as a serverless function through AWS Lambda or Google Cloud Run. All with a few clicks. No messy installs. No Linux command line. All you have to worry about is writing your application code.
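To make that concrete, here’s a minimal sketch of the kind of function you’d deploy, written in the style of an AWS Lambda Python handler. The event fields and the greeting logic are purely illustrative—the point is how little code a deployable unit needs to be.

```python
import json

def handler(event, context):
    # Lambda passes the trigger payload in `event`; here we just
    # read a name out of it and return an HTTP-style response.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

That’s the whole deployable unit: no server process, no port binding, no process manager. The platform invokes `handler` per request and scales the instances for you.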

But my problem is that it doesn’t solve the original problem very effectively. Remember, the problem is that developers don’t want to manage servers to deploy their code. Serverless is supposed to address that. In theory, it could. But in reality, few developers use serverless deployments. It’s just too intimidating.

For instance, when I was looking at deployment options for my trading bot microservices, I stumbled upon the serverless hype train. At the time, I was already setting up Kubernetes. Because I knew that was the industry standard for robust, scalable apps. Good enough for Uber, good enough for me. And for some reason, serverless sounded even more daunting.

I was like, “What do you mean serverless? No servers? How on earth does that work?” So I left it alone. I thought it was too complicated. In reality, a serverless deployment is probably much easier to set up than a Kubernetes cluster. Joke’s on me. But I don’t think I’m alone in this reaction. Serverless is intimidating for newcomers. Which is ironic, because those are the people it’s supposed to benefit most.

I mean, they should be teaching serverless in beginner programming courses. It’s so easy to deploy things this way. Just write some code, copy a generic Dockerfile from the internet, and deploy with a few clicks through a website. Far easier than deploying to some DigitalOcean droplet. Not to mention much more useful, because little serverless apps can easily extend the functionality of any application. And they’re trivial to develop. Yet all this upside goes undiscovered because this stuff scares people.
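That “generic Dockerfile” really is just a few lines. Here’s a minimal sketch for a Python web app destined for something like Cloud Run—the file names (`main.py`, `requirements.txt`) are illustrative, not prescribed:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Cloud Run routes traffic to the port your app listens on (8080 by default).
CMD ["python", "main.py"]
```

Copy, build, push, click deploy. That’s the entire ops story for a small service.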

That’s the problem with serverless: It’s terrifying. Maybe it’s the content surrounding it. Most of the people talking about this are senior DevOps types at industry conferences. That creates a false image that it’s an advanced concept for the big boys. Which isn’t necessarily true. Or, maybe it’s an adoption problem. Maybe serverless solutions just need to be marketed to beginners better.

Be that as it may, serverless is pretty cool. I’ll be playing with it a lot more and keeping an eye on the space. Slowly but surely, some cloud functions will find their way into my production bots. And that’s when the real fun will begin.

~ February 27, 2021