You know what? Serverless actually does involve servers, and containers don't really 'contain' anything; they run microservices. Keeping up with the cloud computing landscape is a full-time job, and sometimes we're all asked to make calls that go against popular opinion.
The latest of these is the face-off between Serverless and Containerization. Contrary to general belief, the two technologies have a lot in common.
Do you want a modern architecture? They both have it. Do you want to reduce your time to market for large-scale application development? They both have it. Do you want scalability and agility? They both have it.
However, people say that serverless is a replacement for containers, but is it really? There is a lot of confusion, and it is indeed hard to decide which one is best for you. But my dear friend, you deserve to know!
In this article, we'll compare the two technologies and try to find out which one, if any, is better. What do they have in common, and what sets them apart? Which is suitable for which use cases? But first, let's start with some background.
What is Containerization?
Containerization can be defined as an OS-level virtualization method. It lets you deploy and run distributed applications without dedicating an entire virtual machine to each one. Instead, multiple isolated systems, aka 'containers', run on a single control host and share access to a single kernel.
Containers are more efficient than VMs because they share the host's OS kernel, whereas each VM requires its own separate OS instance.
Containers encapsulate the components required to run the desired software, such as files, environment variables and libraries. The host OS constrains the container's access to physical resources, such as CPU and memory, which means a single container cannot consume all the physical resources of the host.
Containers make it simple for developers to know that their software will run, without worrying about where it will be deployed. They also enable what we commonly know as 'microservices'.
This lets different teams work on different parts of the application independently, as long as there is no major change in how those parts interact with each other. As a result, it is easier to develop software and test it quickly for possible errors.
To manage all these containers, you need another set of specialized software, such as Docker Swarm or Kubernetes. This orchestration software lets you push containers out to different machines while making sure they keep running, and it empowers you to spin up more containers for a given application whenever demand rises.
What is Serverless Architecture?
The term 'serverless' became popular when Amazon launched its AWS Lambda service in 2014. Since then the ball has kept rolling, and more and more people are adopting serverless every day.
If you want to know A-Z of serverless architecture, read our Comprehensive Guide to Serverless Architecture.
Let's have a feature-by-feature comparison of Serverless vs. Containers.
#1. Longevity Limitations
Serverless: As you may know, functions are very short-lived; here, 'short' can mean five minutes or less. Functions are ephemeral, meaning the container running the function may live for only a single execution and die once it completes.
A short lifespan is one of the caveats of functions, but it also provides agility, giving developers the freedom and flexibility to push easily scalable apps into production.
Containers: This is not the case with containers. Containers keep running and do not die once an execution is done. This lets them leverage the benefits of caching, which we will discuss later, but at the same time, scaling is not instantaneous.
#2. State Persistence
Serverless: As discussed above, functions are ephemeral, or 'short-lived', which in turn makes them stateless. The more stateless your functions are, the more ways there are to put them together and build something powerful.
The power of stateless computing lies in empowering developers to write small, reusable functions and combine them. But it comes with a caching downside: since functions are stateless, you cannot cache anything for later use, and as a result you face higher latency, which we will discuss in the next point.
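As a rough illustration (the function names here are made up, not from any particular framework), stateless functions compose cleanly because each call depends only on its input:

```python
# Two stateless functions: output depends only on input, no shared state.
def normalize(record):
    """Lowercase keys and strip whitespace from values."""
    return {k.lower(): v.strip() for k, v in record.items()}

def enrich(record):
    """Derive a display name from existing fields."""
    return {**record, "display_name": f"{record['first']} {record['last']}"}

# Because neither function caches or mutates state, they can be chained,
# retried, or run on any worker without coordination.
result = enrich(normalize({"First": " Ada ", "Last": " Lovelace "}))
print(result["display_name"])  # Ada Lovelace
```

The flip side is exactly the one described above: nothing computed in one call survives for the next one.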
Containers: With containers, you can leverage the benefits of caching. To let data persist even when containers are terminated, you'll need a storage mechanism that manages the data outside the container. But why is caching so important?
If the objects a container build is about to produce are the same as in previous builds, reusing the cache from the earlier result is a great saver of compute time. This makes building new containers extremely fast.
#3. Latency & Startup Time
Serverless: Since functions are stateless, caching is unavailable and there are no copies of your functions running on standby, which results in higher invocation times. A function stays warm (that is, its code is kept ready to execute on command) for about 15 minutes and then dies; call it after that window and you'll get a cold start.
As a result, you may face latency issues, especially with concurrent users. To tackle this, you can schedule periodic invocations that keep your functions warm, as mentioned in our previous blog, Serverless Performance: Challenges and Best Practices.
However, this is a temporary fix, best used while you're dealing with few functions. If you apply it to all your functions, you'll soon have a lot of dummy invocations that you won't be able to manage properly. Still, when the number of functions is quite small, it makes sense to use functions rather than spinning up a whole container.
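The warming trick can be sketched as follows. Note that the `keep_warm` event field is an illustrative convention, not anything AWS defines: you would configure a scheduled rule (e.g. CloudWatch Events) to invoke the function with that payload every few minutes.

```python
import json

def handler(event, context):
    # Scheduled "ping" events carry a marker field; returning early keeps
    # the container warm without doing any real work.
    if isinstance(event, dict) and event.get("keep_warm"):
        return {"statusCode": 200, "body": "warm"}

    # Normal request path.
    return {"statusCode": 200, "body": json.dumps({"echo": event})}
```

The early return matters: you want the ping to cost as little compute time as possible, since you are billed for execution duration.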
Containers: Even in the pre-serverless era, containers were always sitting there: all you have to do is send an HTTPS request and you get an instant response with low latency. With caching as an advantage, containers can be spun up fast without recreating any files, since a reference to them is sufficient to locate and reuse the already-built layers.
#4. Portability & Migration
Serverless: Let's assume you're already using many different AWS services. If that's the case, then opting for Lambda functions is extremely easy, as Lambda integrates quickly and accessibly with those other services.
Even if that's not the case and you fear vendor lock-in, what you can do is make sure all API endpoints and URLs used in your code are mapped through a domain whose DNS is under your control.
This gives you the option of cutting off a particular service or redirecting requests to a different endpoint of your choice (e.g., another BaaS provider). That is better than hardcoding an endpoint that is not under your control or cannot be altered.
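In code, this boils down to resolving the endpoint from configuration rather than hardcoding it. A minimal sketch (the environment-variable name and default URL are illustrative):

```python
import os

# Resolve the backend endpoint from configuration instead of hardcoding it.
# Pointing API_BASE_URL at a domain you control (mapped via DNS) lets you
# redirect traffic to another provider without touching this code.
API_BASE_URL = os.environ.get("API_BASE_URL", "https://api.example.com")

def build_url(path):
    """Join the configurable base URL with a request path."""
    return f"{API_BASE_URL.rstrip('/')}/{path.lstrip('/')}"

print(build_url("/v1/orders"))  # e.g. https://api.example.com/v1/orders
```

Switching providers then becomes a configuration or DNS change rather than a code change.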
However, there are many FaaS providers, and concern about vendor lock-in is quite understandable. If Lambda isn't meeting your region-specific requirements, here is what you can do: keep all Lambda handler code isolated, as extremely thin shims over logic that lives in other modules and classes.
This increases reusability and, should a refactor away from Lambda ever become necessary, makes that work much easier and more straightforward. It also facilitates unit testing. An example of a thin Lambda handler:
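A sketch of what such a thin handler might look like (the module and function names are hypothetical). All business logic lives in plain Python that knows nothing about Lambda; the handler only unpacks the event, delegates, and repacks the response:

```python
# orders_service.py -- provider-agnostic business logic, unit-testable anywhere.
def create_order(customer_id, items):
    total = sum(item["price"] * item["qty"] for item in items)
    return {"customer_id": customer_id, "item_count": len(items), "total": total}

# handler.py -- the only Lambda-specific code: unpack, delegate, repack.
def lambda_handler(event, context):
    order = create_order(event["customer_id"], event["items"])
    return {"statusCode": 200, "body": order}
```

Moving off Lambda then means rewriting only the shim, while `create_order` and its tests carry over unchanged.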
For more detail on this, refer to the source.
Speaking of migration, it is still hazy how FaaS fits into the current DevOps framework. Your organisation might write hundreds of functions, and after a while nobody knows what functionality was included in which functions, or how many of them are still in use.
Containers: If you opt for a container-based microservices architecture, you get great portability. The time and effort needed to move code from a developer's laptop to your internal datacenter, or out to different cloud providers, are minimal.
With the immense pressure to innovate, and time to market getting shorter and shorter, opting for microservices empowers you to spin up new versions of your application quickly.
Hence microservices sound like a good option to get started with if you're moving away from monoliths, thanks to the ease of migration and the variety of technology-stack options across containers.
However, running containers on a cloud platform brings a huge number of interdependencies: upgrades to container hosts, container images, the container engine and the container orchestrator all need to be planned together.
For some legacy applications that you want to shift to microservices, 'containerizing' them may be an easier and cheaper option than re-architecting the whole application into functions.
#5. Language Support
Serverless: The languages supported by popular FaaS providers are quite limited, mainly Node.js, Python, Java, C# and Go (in the case of AWS Lambda).
Containers: Containers, in contrast, empower you with heterogeneous development environments: you can work on any technology stack you want. This might not sound like a huge benefit, since developers these days are well-versed in multiple languages, but it is!
When you're hiring for a new project, you won't need to take language into consideration for a microservice architecture. Since microservices are independently deployable and scalable, with each service providing a firm module boundary, services can be written in any language of choice and managed by different teams.
#6. System Control
Serverless: Dealing with functions is not a hard task, as it eliminates infrastructure complexity; as a result, you can focus more on developing your product and on business outcomes. It significantly reduces time to market, which is not the case with containers.
However, having no responsibility for infrastructure management means depending on the service provider for fine-grained control over security, resource management and allocation, and policy setting.
Containers: On the other hand, managing cluster configuration is a serious challenge, as it requires a solid background in container technology. In terms of control, though, microservices are comparatively easy to handle, and orchestration frameworks like Kubernetes accelerate your governance and control over the architecture.
A container-based microservice architecture gives you full control over individual services as well as the whole system. This enables you to set policies, allocate and manage resources, and keep fine-grained control over security and migration services.
With full control over the container system comes the ability to see inside and outside the containers. This allows comprehensive, effective testing and debugging across multiple environments with extensive resources. By contrast, a faithful local implementation and testing of functions isn't really possible, so performance issues are hard to predict.
#7. Resource-Heavy Processing
Take AWS Lambda as an example: if a function takes more than five minutes, you'll be forced to dissect the task into smaller ones, and that is not the only limitation of working with functions.
You can allocate a maximum of 1.5 GB of RAM for executing a single function, and your deployment package must not exceed 50 MB (limits at the time of writing). With containers, by contrast, you can allocate computing resources according to your application's requirements.
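One common workaround for the per-invocation time limit, sketched below, is to split a large batch into payloads small enough to finish within the limit (the chunk size and function name are illustrative; in practice you would fan the batches out via a queue or a workflow service such as Step Functions):

```python
def chunk(items, size):
    """Split a large work list into batches small enough for one invocation."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Instead of one invocation processing 10,000 records (and hitting the
# timeout), fan out 100 invocations of 100 records each.
batches = chunk(list(range(10_000)), 100)
print(len(batches))  # 100
```

Each batch becomes an independent, retryable unit of work, which also plays nicely with the stateless model discussed earlier.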
When to Use Serverless?
Serverless architecture is perfect if your traffic pattern changes unpredictably. Not only is scaling handled automatically, the system even shuts down when there is no traffic at all. As for cost, you only pay for the time your functions are actually running.
Developers need not worry about managing physical infrastructure: write your code and just hit deploy! If there are any changes to the code, you can modify them instantly and ship. Your developers don't need to worry about where your code is running, or how!
However, serverless is not all sunshine. The technology is still in its infancy, and there are limitations around vendor support and ecosystem lock-in. You will need to adhere to the limits imposed by your serverless platform.
When to Use Containers?
Containers are great if you want an operating system of your own choice and full control over the installed programming languages and runtime versions. Even if you want to use software with specific version requirements, containers are a great place to start.
It is also worth noting that it is possible to manage a huge fleet of containers with different technology stacks.
This is a great benefit if you're thinking of migrating an old, legacy system into a containerized one. Container orchestration tools like Kubernetes come with well-established best practices that make managing a large-scale container setup easier.
However, containers require a lot of maintenance and come with an operational price tag. When using containers, you'll split your legacy monolithic application into multiple microservices and run them in individual groups of containers.
But these containers need to talk to each other, and you'll need to keep the technology stack and security fixes up to date for every stack you opt for.
Container orchestration platforms can solve your unpredictable-traffic issues with auto-scaling; however, the process of spinning containers up or down won't be instantaneous. Contrary to functions, containers run all the time and never shut down completely, which means runtime cost never drops to zero.
Which one should you choose? It looks like both have their own benefits. Either can be the best or the worst solution depending on the use case, and there is no predefined answer.
If your existing application is big and lives on-premise, you may first want to run it in containers and then slowly move some of its parts towards functions. If you already have a microservice-based application and don't mind vendor lock-in, you can think about going serverless.
Serverless architecture is definitely worth your attention, especially for its cost-effectiveness. As discussed in the introduction, we can't really say that serverless means the death of containers. Not as of now. Both containers and serverless are growing technologies, and it will be interesting to see what each of them holds.
Am I missing something? Connect with me on Twitter @RohitAkiwatkar to start the conversation.