Serverless computing has been hyped as the ultimate solution to scalability and elasticity, but many who adopted it were quickly disappointed by rising costs and maintenance difficulty. There may be no servers, but we still need a major effort to deploy and test workloads. Now that the hype is over, we face facts: Serverless is a specific tool, suitable only for very specific use cases.
The initial promises of serverless were undoubtedly enticing for any developer who deals with cloud-native applications. No more server patching, no more complicated deployments. Just deploy your application and see it running in minutes.
Over- (and under-) provisioning is now a thing of the past. Your serverless application will automatically scale up and down according to workload.
Serverless technology was also tightly integrated into CI/CD solutions, and on top of that, your costs would go down, as you essentially pay only for what your application is using. How can anybody say no to this new, magic technology?
Serverless is the perfect paradigm. Developers can move fast and focus only on business logic code, while managers and chief financial officers can enjoy the lower costs compared to traditional cloud methodologies.
Of course, the reality is much different. Several teams jumped on the serverless bandwagon without ever understanding the implications of their choice.
There is no silver bullet. Every seasoned engineer knows that all choices come down to trade-offs, and serverless is no different.
When Is Serverless Computing the Right Choice?
Is your application asynchronous, with many small individual components that can be tested in isolation? Does traffic come and go with non-predictable timings? Is your virtual machine just sitting there doing nothing on the weekends? In that case, serverless is definitely something that you should evaluate.
Serverless Only Solves a Few Problems for Developers
The main selling point of serverless for developers (after the scalability benefits) is the ease of deployment and lack of maintenance. Just wrap your business code in a function and deploy it in less than a minute. No need to create a server, install dependencies and copy your code onto it anymore.
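To be fair, the happy path really is that short. A minimal sketch of such a function, written here as an AWS Lambda-style Python handler (the event fields and response shape are illustrative assumptions for the example, not taken from this article), looks roughly like this:

```python
# Minimal Function-as-a-Service sketch (AWS Lambda-style Python handler).
# The event fields and response shape are illustrative assumptions.
import json

def lambda_handler(event, context):
    # The "business logic": echo a greeting for the name in the request.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying something like this genuinely takes minutes, which is exactly why the pitch is so effective.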
This might be true, especially in the case of Function-as-a-Service providers, but the argument rests on a common fallacy. Server management is not what blocks developers from shipping fast. More precisely, it’s not the only thing that blocks them.
In any non-trivial application, teams have several other concerns apart from servers. Developers need to manage their database schemas, the migration changesets, the integration tests between several components, the security vulnerabilities of all dependencies, the communication protocols with other external services and so on.
So, what does serverless offer for these tasks? Absolutely nothing. You’re still on your own. All these concerns remain even if you adopt serverless.
Seasoned developers know that what matters most is your data (and by extension your database/queue). Mutating your database schema and making sure that you have enough data for auditing is a constant struggle. Losing valuable data will instantly make your customers angry.
So yes, serverless might help you in some scenarios with your computing needs and your business logic, but everything else stays the same.
Serverless is like promising architects that this new paint will dry in two minutes instead of two hours. Even if this is true, is waiting for paint to dry the main problem that architects have today?
Constraints Introduced by Adopting Serverless
Building a serverless application comes with a lot of limitations.
You need to use an approved programming language. You need to communicate with the other serverless components of your cloud provider. You need to account for cold-start issues. You must make sure that your code finishes within a specific time limit. You need to manage context and persist data in a different way.
All these limitations can make application development difficult and cumbersome. Ideally, developer teams that adopt serverless should already have experience with microservices and know how to write asynchronous services with proper network calls. Writing a serverless application completely changes what developers know about the running context and how to handle network failures and errors.
Adopting serverless implies that your team needs training on how to write asynchronous components.
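As a concrete (and deliberately hedged) illustration of the “manage context and persist data in a different way” constraint: a function instance can be recycled after any invocation, so anything worth keeping has to go to an external store instead of local memory or disk. The DynamoDB table name and item attributes below are assumptions made up for this sketch, not a recommended design:

```python
# Sketch: serverless handlers cannot rely on local memory or disk surviving
# between invocations, so state goes to an external store on every call.
# The table name and item attributes are illustrative assumptions.
import time
import boto3

table = boto3.resource("dynamodb").Table("order-events")  # hypothetical table

def lambda_handler(event, context):
    # Persist the incoming event immediately: the container may be recycled,
    # so in-process caches or temp files will not survive until the next call.
    table.put_item(Item={
        "order_id": event["order_id"],
        "received_at": int(time.time()),
        "status": "accepted",
    })
    # Any long-running work must fit the platform's execution time limit,
    # or be split up and handed off to another asynchronous component.
    return {"status": "accepted"}
```

Nothing here is hard, but it is a different set of habits than writing a synchronous service that keeps its working state in memory, which is why the training point above matters.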
Debugging serverless applications is a nightmare compared to traditional methods. Running a serverless application locally (all of its components) is next to impossible. Running integration tests against a group of components locally is challenging and time-consuming. Quickly locating an issue in a serverless application is more difficult than in a monolith. You’ll spend the time you saved from not managing servers on debugging your serverless applications.
When a production application has an issue, developer teams always follow a familiar sequence of steps. First, the team needs to find where the problem is. Then, they need to code a solution that fixes the problem. Finally, they need to deploy the fix.
Serverless applications make the third step easier, but the first two steps are much more difficult. Knowing that you can deploy your code quickly is not a big benefit if it takes too much time to understand exactly where the problem is in all your asynchronous components and network call stacks.
Do You Really Save Costs With Serverless?
If you read most of the tutorials about serverless applications, you might get the impression that all services in the world perform just two tasks: image resizing and video/audio encoding. There’s a reason why serverless proponents always use these examples. The benefits of serverless are only evident if your application follows some very specific patterns.
Is your application asynchronous, with many small individual components that can be tested in isolation? Does traffic come and go with non-predictable timings? Is your virtual machine just sitting there doing nothing on the weekends?
If so, serverless is definitely something that you should evaluate.
Many real-world applications don’t follow this kind of behavior, and for those, a traditional virtual machine or container is a better solution. In fact, there is a real danger of paying more money if you port this kind of application to a serverless architecture.
Instant scalability comes with great responsibility. Several cloud vendors don’t offer any hard limits by default on how much capacity your application can consume. The benefits of serverless scalability are only real if you have the budget to actually cover them.
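If your vendor supports it, you can at least put a ceiling on that scalability yourself. A hedged, AWS-specific sketch using boto3 (the function name and concurrency value are made-up placeholders):

```python
# Sketch: capping how far one serverless function can scale, so a traffic
# spike cannot scale the bill along with it. AWS-specific example; the
# function name and concurrency limit are illustrative assumptions.
import boto3

lambda_client = boto3.client("lambda")

# Reserve (and thereby cap) concurrent executions for a single function.
lambda_client.put_function_concurrency(
    FunctionName="image-resizer",        # hypothetical function name
    ReservedConcurrentExecutions=50,     # hard ceiling on parallel invocations
)
```

Even then, a cap protects the budget at the price of throttled requests during a spike, so the trade-off does not disappear; it just becomes explicit.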
Some of Us Can Go Back to Traditional Methods
So, now that the hype is over, what should you use instead? This is a tricky question, because it assumes that developers must use only one development methodology. This could not be further from the truth: developers can mix and match according to the type of application.
The main problem with the hype around serverless is that it was presented as the next standard way of developing applications. In reality, serverless is just another application architecture; you can still use the others. Developers should add serverless to their tool chest like any other tool. Understanding when to use serverless is just as important as understanding how to use it.
If your application matches certain requirements (bursty traffic, downtime between requests, asynchronous components), then serverless might be a perfect solution. But traditional development methods will be much easier (and more cost-effective) in several other cases.
The big advantage of traditional applications is predictability on all fronts. A monolithic application is very easy to launch locally on a developer laptop. You can monitor it and test it as a whole without any specialized tools. Function calls within a monolith are perfect: they have zero latency, 100 percent reliability and easy traceability. Debugging a monolithic application is much faster than a serverless one, and there are already several specialized debugging tools that allow you to understand what, where and how an issue appeared.
Developers are always more comfortable with synchronous applications. It is very easy to find the order of events and explain why something happened if there is a predictable order in all function calls and memory requests.
On the matter of cost, monoliths are better when you have a constant and predictable load. More importantly, you can reason about your costs by simply multiplying the time a virtual machine is up by your cloud provider’s cost per minute. It is very easy to lock this down (do not use VM autoscaling) or even change it on demand (preload some VMs in advance of a big event). You have total control over the costs.
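To make that predictability concrete, here is a back-of-the-envelope sketch; the rates and fleet size are made-up placeholders, not real vendor prices:

```python
# Back-of-the-envelope cost model for a fixed, always-on deployment.
# All numbers are made-up placeholders for illustration only.
VM_COST_PER_HOUR = 0.10   # hypothetical on-demand rate per VM
VM_COUNT = 2              # fixed fleet size, no autoscaling
HOURS_PER_MONTH = 730

monthly_cost = VM_COST_PER_HOUR * VM_COUNT * HOURS_PER_MONTH
print(f"Predictable monthly bill: ${monthly_cost:.2f}")  # same figure every month
```

Compare that with estimating a serverless bill, where you first have to predict invocation counts, execution durations and memory settings for every component.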